Rise Of The Machines: Experts Look At AI, Robotics And The Law

NEW YORK -- Artificial intelligence, robots and the law are all changing at a rapid pace. A panel of experts at a recent event at Fordham Law School discussed the latest developments and pointed to the limits of the law when applied to AI areas like facial recognition, automated weapons systems, and financial technology.

Panel from left: Barocas, Crootof, Felten, Johnson and Pasquale.

The event, “Rise of the Machines: Artificial Intelligence, Robotics, and the Reprogramming of the Law,” took place on 15 February.

Speakers on a panel on Ethical Programming and the Impact of Algorithmic Bias included: Solon Barocas, assistant professor, Department of Information Science, Cornell University; Rebecca Crootof, clinical lecturer in law and research scholar in law, and executive director of the Information Society Project at Yale Law School; Edward Felten, professor of computer science and public affairs, and director of the Center for Information Technology Policy at Princeton University; Kristin Johnson, law professor, affiliate, Murphy Institute of Political Economy, Tulane University Law School; and Frank Pasquale, law professor, University of Maryland Law School.

Gender Stereotyping and Other Flaws

Barocas talked about how AI systems are sometimes trained to treat words that appear near each other as having a related meaning or significance, and one thing researchers discovered was that this leads to “very typical” gender stereotyping: certain occupations end up more strongly associated with certain genders.

And with facial recognition, he said, research found that commercial gender-recognition packages performed notably worse for women than for men, and worst of all for dark-skinned women. Beyond simply getting things wrong, he said, there is sometimes a dehumanising quality to an interaction with a technology that fails to recognise you as a person at all. He cited a case from a couple of years ago in which Google had tagged a photo of a black person as a gorilla, erroneously classifying the person as an animal rather than a human being.

He also talked about skewed Google search results that reflect the data fed into them. For instance, a few years ago, typing in the search term “CEO” led to a full page of image results showing only white men, with a single picture of a woman in the last box at the very bottom right: Barbie, the children’s doll, in a business suit.

Yet another example was translation technology. He gave the example of the phrase “She is a doctor, he is a nurse” translated into Turkish, whose pronouns do not mark gender; when translated back into English, the technology automatically switched the genders to “He is a doctor, she is a nurse,” because those pairings are statistically more common.
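Barocas’s translation example comes down to a statistical tie-break. The following is a minimal sketch, not the behaviour of any particular translation system: it assumes the back-translation step simply picks whichever pronoun co-occurs with the occupation more often in its training counts, and the counts below are invented purely for illustration.

```python
# Toy illustration of the statistical choice described above: when the
# source language (e.g. Turkish "o") does not mark gender, a naive system
# picks the pronoun that co-occurs most often with the occupation.
# The counts are invented for illustration only.
cooccurrence_counts = {
    ("doctor", "he"): 9000, ("doctor", "she"): 3000,
    ("nurse", "he"): 1000, ("nurse", "she"): 8000,
}

def back_translate_pronoun(occupation: str) -> str:
    """Choose 'he' or 'she' for a gender-neutral source pronoun."""
    he = cooccurrence_counts.get((occupation, "he"), 0)
    she = cooccurrence_counts.get((occupation, "she"), 0)
    return "he" if he >= she else "she"

for occupation in ["doctor", "nurse"]:
    print(f"{back_translate_pronoun(occupation)} is a {occupation}")
# Prints "he is a doctor" and "she is a nurse" -- the genders get switched.
```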

There are options that have been suggested on what to do about this, he said, including:

– Do nothing – because it is important to reflect reality as it is.

– Improve accuracy – simply make the systems get things right more often.

– Blacklist things we see as inappropriate – for instance in the case of the person being mistakenly tagged as a gorilla, just remove the term gorilla so that cannot happen.

– Scrub to neutral – seeking to break the associations we see as unacceptably stereotypical while maintaining those that are genuinely informative. This might sever the link between occupation and gender while preserving other aspects (see the sketch after this list).

– Representativeness – at a minimum, these representations should be in keeping with the current distribution of men and women in the labour force in that field, not making it worse than it actually is.

– Equal representation – which he termed “aspirational”: show the proportions we would like to have, even if that is not yet the case.

– Cultivate a critical awareness of these issues – giving people a critical literacy about the way they are produced.
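One common way to implement the “scrub to neutral” option, referenced in the list above, is to remove the component of a word’s embedding vector that lies along a learned gender direction while leaving the rest of the vector intact. The sketch below is only an illustration of that idea, with tiny made-up vectors; real systems use embeddings of hundreds of dimensions and a carefully estimated gender subspace.

```python
import numpy as np

# Toy 3-dimensional "embeddings" (invented numbers for illustration).
embeddings = {
    "he":     np.array([ 1.0, 0.1, 0.0]),
    "she":    np.array([-1.0, 0.1, 0.0]),
    "doctor": np.array([ 0.6, 0.8, 0.3]),   # leans toward "he"
    "nurse":  np.array([-0.7, 0.7, 0.2]),   # leans toward "she"
}

# Estimate a gender direction from a definitional pair and normalise it.
gender_dir = embeddings["he"] - embeddings["she"]
gender_dir = gender_dir / np.linalg.norm(gender_dir)

def scrub_to_neutral(vec: np.ndarray) -> np.ndarray:
    """Remove the component of vec that lies along the gender direction."""
    return vec - np.dot(vec, gender_dir) * gender_dir

def gender_lean(vec: np.ndarray) -> float:
    """Positive leans 'he', negative leans 'she', zero is neutral."""
    return float(np.dot(vec, gender_dir))

for word in ["doctor", "nurse"]:
    before = gender_lean(embeddings[word])
    after = gender_lean(scrub_to_neutral(embeddings[word]))
    print(f"{word}: lean before={before:+.2f}, after={after:+.2f}")
# After scrubbing, both occupations sit at 0.0 on the gender direction,
# while their other, "informative" components are unchanged.
```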

Automated Weapons and ‘War Torts’

Crootof talked about autonomous weapons systems (AWS), which are capable of independently selecting and engaging a target based on pre-programmed constraints, and which are already being used in the field. About 30 countries have this capability now, she said. These are not drones, which are semi-autonomous weapons systems with a human making the decisions.

Like any new technology, AWS raise a number of concerns, including moral, strategic, security, and legal issues. She focused in particular on accountability problems. AWS create a break in the chain between the human decision to use force and the moment that force is exercised, she said, which raises a question of responsibility. “Who or what should be responsible when there’s a malfunction?” she asked. With such complexity there are going to be accidents. There could also be intentional misuse, or conventional use for bad purposes, and there is a risk of cybersecurity problems, she said.

Crootof proposed a new solution: “war torts”. She noted that criminal law assigns guilt for blameworthy acts, while tort law assigns liability for accidents. In tort law, the goals are to minimize the harm of necessary but harmful activities, compensate those unfairly harmed, and reduce accidents through ex ante incentives to avoid them.

Law has several responses to “scary tech,” she said: ban it, wait and see, and regulate.

  1. Ban it: If you don’t have the new technology, then you don’t have to worry about the problems it causes, she said. This works for some technologies; there have been a number of successful weapons bans in the past, but success depends on the technology. For AWS, she said, a successful ban is unlikely.

Crootof listed “factors that increase the likelihood of a successful international weapons ban”:

“Cause superfluous injury or suffering (in relation to prevailing standards of medical care); inherently indiscriminate; is or is perceived to be sufficiently horrible to spur civil society actions; the scope of the proposed regulation is clear and narrowly tailored; not already in use; other means exist / are nearly as effective at accomplishing a similar military objective; not novel – the weapon is easily analogized to others or its usages and effects are well understood; it or similar weapons have been previously regulated; strong multi-state commitment to enacting regulations; and there would be identifiable violations.”

The only factor that weighs in favour of an AWS ban is civil society engagement, she said. But for AWS, Crootof said she does not see a ban as very likely.

  1. Wait & See: Here one would use analogies to stretch existing law to address new technologies, which she said works for some problems, for instance for intentional misuse, but for AWS, the better the analogy the less helpful the law, and analogies are misleading and limiting. For instance, looking at weapons, the law assumes they cannot act on their own. And combatants are not applicable because, for instance, humans cannot be hacked. With child soldiers, they are lethal fighters and we don’t hold them accountable for their actions. But the law is not helpful here because we ban child soldiers in order to protect children.

“We are not going to ban autonomous weapons systems to protect robots,” she noted. What about animal combatants, as in the past when elephants, pigeons, bats, or dogs were used? That comparison fails too: animals are not autonomous entities, there are no laws against animal combatants, and AWS, being autonomous, do not fit the analogy.

“It seems good until you look at the law. There is no international law on this,” she said.

  3. Create a new law/regulate: Create new regulations to address new problems. This works for some problems but can be difficult to pull off, she said. As an example, she noted that a war crime is a violation of international humanitarian law committed wilfully and intentionally. Autonomous weapons may take an action that looks like a war crime, such as bombing a hospital or razing a village, but who is accountable? The programmer, the manufacturer, and so on cannot fairly be held accountable, because they may have acted with the best of intentions and without wilful intent.

So it needs a new framing, she said. We should not ask whom we hold morally responsible or accountable. Instead, we should ask what liability regime minimizes the possibility that these acts occur in the first place. That is the logic of tort law, not criminal law, leading Crootof to argue for a “tort law lens” on the problem. There can be overlap, but while criminal law assigns blame for wrongful acts, tort liability is typically assigned for accidents. Ultimately the solution will depend on the problem, and may be any of the above, she concluded.

Controlling Systems We Don’t Understand

Felten said there is a lot of discourse about how to provide transparency, accountability and governance in AI systems, and he looked at the question from a computer science perspective. Computer scientists have thought for a long time about how to prevent undesirable outcomes, he said, often when a lot is at stake in extremely complex systems: engineers working on safety-critical systems must make sure they do not hurt anyone; systems handling private data must keep that data from leaking; and systems making important administrative decisions, ensuring compliance, or guarding people’s money or assets must behave as they should.

Artificial intelligence does perhaps raise the stakes on this challenge, he said, but it does not change the underlying problem. For computer scientists there is not a single method or approach. There is a “constellation” of different approaches trying to ensure reliable, accountable behaviour, and those different approaches are used together.

He described three approaches computer scientists tend to use on these problems.

First is transparency. In this approach, you publish the code of your system, or the data you are working with, and then allow experts to study it. This can sometimes be used to determine that a particular bad outcome can never happen. But it does not tell you everything you might want to know about what the system will actually do when you turn it on. There is a deep theoretical result showing that inspecting or analysing code can never give you complete answers to all the questions about what that code will do. There are fundamental impossibility results, as well as practical limitations, on what you can learn through transparency alone.

Second is to provide hands-on access to the system, Felten said. For example, auditors might be allowed to interact with the system. If you have systems that are making decisions about some resource or benefit, you might give access to this system to auditors and they might make up hypothetical people, feed them into the system, and see what happens.

Testing is good for telling you whether something can happen, he said, but it cannot tell you what would happen in a case you did not test. For any interesting system, the number of possible scenarios is astronomically larger than what can be tested, so testing only ever examines a tiny fraction of the situations that might come up. It can show that a bad outcome is possible, but it cannot show that a bad outcome is impossible. And testing is particularly ill-suited where fairness is the problem to be solved, because fairness is about a comparison across scenarios: asking what would happen to different people in different situations.
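The hands-on auditing Felten described could look something like the sketch below: an auditor with access to the system feeds in matched pairs of made-up applicants that differ in only one attribute and compares the outcomes. The decision function here is a deliberately simplistic, hypothetical stand-in, not any real system.

```python
from dataclasses import dataclass, replace

@dataclass
class Applicant:
    income: int
    debt: int
    zip_code: str  # stands in for an attribute the auditor is worried about

def decide(app: Applicant) -> bool:
    """Hypothetical black-box decision system the auditor can only query."""
    score = app.income - 2 * app.debt
    if app.zip_code.startswith("112"):   # a hidden proxy the auditor wants to detect
        score -= 15000
    return score > 20000

# Audit: matched pairs identical except for the attribute under test.
for base in [Applicant(60000, 10000, "10001"),
             Applicant(45000, 5000, "10001")]:
    twin = replace(base, zip_code="11201")
    print(decide(base), "vs", decide(twin))
# A differing outcome within a pair is evidence the attribute matters,
# but no amount of such testing proves that no bad outcome can ever occur.
```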

Third is doing due diligence on the engineering process, according to Felten. The questions asked are what kind of care went into designing the system, what kind of process went into the design decisions, and what assurance the engineering team tried to provide. Just as in other fields where you want confidence in a good result, looking at the process is important. But this tool is also fundamentally limited: computer science and software engineering are still so immature that it is hard to know what process you could follow to be sure of getting good results, he said. And it can be hard to know what the goal is.

The bad news is that it is very hard to understand with very high confidence what a complex system will do, he said.

The good news is that there are methods to help control a system’s behaviour even if you don’t fully understand it. The concept is to take a complex, unruly system and put it inside some kind of wrapper that governs its behaviour. For instance, if you want a car to be unable to go above 25 miles per hour, you could put a governor on it that measures the speed of the wheels and, if they are going above 25 miles per hour, cuts off the flow of fuel to the engine.

“The beauty of this is you can control the behaviour of a thing even if you don’t understand the core thing it is wrapping around,” Felten said.
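Felten’s governor is essentially a runtime wrapper that enforces a constraint without needing to understand the system it wraps. A minimal sketch of that idea, with an invented `Controller` interface standing in for the complex, unruly system:

```python
from typing import Protocol

class Controller(Protocol):
    """Whatever complex, poorly understood system decides how much fuel to send."""
    def fuel_command(self, sensors: dict) -> float: ...

class SpeedGovernor:
    """Wraps any controller and cuts fuel whenever measured speed exceeds the limit.

    The governor only reads the wheel-speed sensor and overrides the output;
    it never needs to understand how the inner controller works.
    """
    def __init__(self, inner: Controller, limit_mph: float = 25.0):
        self.inner = inner
        self.limit_mph = limit_mph

    def fuel_command(self, sensors: dict) -> float:
        if sensors["wheel_speed_mph"] > self.limit_mph:
            return 0.0                      # cut off fuel, whatever the inner system wants
        return self.inner.fuel_command(sensors)

# Usage: wrap an opaque controller (here a trivial stand-in).
class AggressiveController:
    def fuel_command(self, sensors: dict) -> float:
        return 1.0                          # always full throttle

governed = SpeedGovernor(AggressiveController(), limit_mph=25.0)
print(governed.fuel_command({"wheel_speed_mph": 20.0}))  # 1.0 -- inner decision passes through
print(governed.fuel_command({"wheel_speed_mph": 30.0}))  # 0.0 -- governor overrides
```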

A system can be designed with “handles” or “ports” that can be used to look into it and better understand what it is doing, and to have it report to you in terms you can understand, terms that relate to your goals for the system, he explained. Just as a programmer can provide hints about why something was designed a certain way, a system can be designed to make analysis easier.

So it is important, he said, to think carefully about the interface with these systems, between the technical parts of the system and the mechanisms of governance, the management of it. This should be thought about in the design rather than trying to “bolt it on” after the fact. For example, building a system and then trying to think about privacy or cybersecurity after the fact never seems to work.

Finally, he compared thinking about an AI making decisions to a bureaucracy full of people making decisions. “We don’t just let the people in that building do what they want to do,” he said. There is a design, rules of transparency and accountability, administrative law, and other mechanisms that reflect the wisdom gathered over many years of how to design these organisations.

The Newest ‘Jim Crow’ Law

Johnson and Pasquale talked about categorising problems and solutions in finance, the promise of AI in finance, and avoiding discriminatory lending practices by AIs.

Johnson showed a chart indicating that household debt has risen 21 percent since the financial crisis, reaching into the trillions of dollars, yet access to fair credit has dropped for the poorest sector. “It can be expensive to be poor,” she said, noting that access to credit is critical. She explained how financial technology can increase biases, and said uses of data can be seen as “creepy” or “predatory”.

She showed a colour-coded bar chart indicating that those with poor credit have been losing access to credit while predatory behaviour has increased. Instead, the aim should be greater financial inclusion, she said.

There is a rise in “buy now, pay later” point of sale options, said Johnson, but those without credit cannot take advantage of them. There is a concern that financial tech firms might perpetuate the current economic divisions and the limits of the poorest to access credit, she said.

These digital lending platforms have won favour with the current US administration, which in July 2018 moved to offer national banking charters to fintech platforms offering consumer credit, she said. This creates great concern, she said, because of the harm that may result.

In their discussion, Johnson and Pasquale referred to a website called “Will a robot take my job?”, and several books: “How the Other Half Banks”, “Loan Sharks: The Birth of Predatory Lending”, and “Poverty of Privacy Rights”.

Pasquale talked further about bias in the system, asking participants to think about the algorithms behind financial tech systems. For instance, he said, consider a person who was desperate, took credit at a high APR, and paid it back but at great cost to her family: it matters whether that outcome is coded as a success or a failure. Care must be taken in telling the AI it was a success, because the system will then target her again for another loan.
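Pasquale’s point is about how the outcome label is encoded before any model ever sees the data. A hedged, hypothetical sketch of that choice, with invented fields and values, shows how the same repayment record can teach a system two very different lessons:

```python
# Hypothetical repayment record; the fields and values are invented.
record = {
    "repaid_in_full": True,
    "apr": 0.36,
    "late_fees_paid": 450,
    "hardship_flag": True,   # e.g. borrower reported skipping bills to repay
}

def label_lender_view(r: dict) -> int:
    """'Success' = the loan was repaid. Ignores the cost to the borrower."""
    return 1 if r["repaid_in_full"] else 0

def label_borrower_view(r: dict) -> int:
    """'Success' = repaid without hardship or excessive cost."""
    return 1 if r["repaid_in_full"] and not r["hardship_flag"] else 0

print("lender-view label:", label_lender_view(record))      # 1 -> model learns to target her again
print("borrower-view label:", label_borrower_view(record))  # 0 -> model learns this loan was harmful
```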

Pasquale described a situation in which a person cannot access credit because of a “thin credit file.” A new fintech firm gives her a chance to obtain credit on good terms, if she allows it to download and analyse all of the information on her mobile phone during the term of the loan and resell the data. The person takes the loan and makes the repayments on time. But she does not know what is done with her data and cannot find out due to trade secrecy. This is already being deployed in real life, he said.

Discussion

In questions and answers, Felten said the mistakes such systems make are different from those humans make, for instance mistaking a small car with a large shadow for a tank, when a human would have been able to tell the difference.

Crootof warned that assumptions are being made, and recounted a story from World War II in which the Soviets trained dogs to carry bombs and run under tanks to blow them up. But the Nazi tanks ran on gasoline, while the Soviet tanks the dogs had trained with ran on diesel, so the dogs turned around and went back under the more familiar-smelling Soviet tanks.

Barocas said people enter search terms online that reflect their interest or bias, for instance, anti-vaxxers or Holocaust deniers.

Crootof restated that in deciding whether to stretch existing law to fit new situations, the key is to focus on the particular problem. There will be cases where the law runs out, and then you turn to regulation, she said.

Johnson noted that just the previous week, the US Consumer Financial Protection Bureau had rolled back anti-predatory-lending rules.

Felten mentioned two distinct issues: a system may reject applicants based on the inaccurate prediction that they will not repay credit, or it may accurately predict that they cannot repay but miss other societal values served by granting the loan.

Johnson called it the “newest Jim Crow law,” referring to past state and local laws in the United States in place to enforce racial segregation.

 

Image Credits: William New
