This is where we are in 2017: sophisticated algorithms are predicting and helping to solve crimes committed by humans; predicting the outcome of court cases and human rights trials; and doing some of the work traditionally done by lawyers in those cases. It has even been suggested that by 2040 robots will be committing a good chunk of the world's crime. Just ask the toddler who was run over by a security robot at a California mall last year.
How do we make sense of all this? Should we be terrified? Terror is generally unproductive. Should we shrug our shoulders as a society and get back to Netflix? Tempting, but no. Should we start making plans for how we deal with all of this? Absolutely.
Fear of Artificial Intelligence (AI) is a big theme. Technology can be a downright scary thing, particularly when it's new, powerful, and comes with lots of question marks. But films like Terminator and shows like Westworld are more than just entertainment; they are a glimpse into the world we might inherit, or at least into how we are conceiving potential futures for ourselves.
Among the many things that must now be considered is what role and function the law will play. Expert opinions differ wildly on the likelihood and imminence of a future where sufficiently advanced robots walk among us, but we must confront the fact that autonomous technology with the capacity to cause harm is already around, whether it's a military drone with a full payload, a law enforcement robot detonated to kill a dangerous suspect, or something altogether more innocent that causes harm through accident, error, oversight, or good ol' fashioned stupidity.
There’s a cynical saying in law that “where there’s blame, there’s a claim”. But who do we blame when a robot does wrong? This proposition can easily be dismissed as too abstract to worry about. But let’s not forget that a robot was arrested (and released without charge) for buying drugs, and that Tesla Motors was absolved of responsibility by the American National Highway Traffic Safety Administration when a driver was killed while his Tesla was in Autopilot mode.
While problems like this are certainly peculiar, history has a lot to teach us. For instance, little thought was given to who owned the sky before the Wright brothers took their Flyer for a spin at Kitty Hawk. Time and time again, the law has been presented with such novel challenges, and despite initial overreaction, it got there in the end. Simply put: law evolves.
The role of the law can be defined in many ways, but ultimately it is a system within society for stabilising people’s expectations. If you get mugged, you expect the mugger to be charged with a crime and punished accordingly.
But the law also has expectations of us; we must comply with it to the fullest extent our consciences allow. As humans we can generally do that. We have the capacity to decide whether to speed or obey the speed limit – and so humans are considered by the law to be “legal persons”.
To varying extents, companies are endowed with legal personhood, too. It grants them certain economic and legal rights, but more importantly it also confers responsibilities on them. So, if Company X builds an autonomous machine, then that company has a corresponding legal duty.
The problem arises when the machines themselves can make decisions of their own accord. As impressive as intelligent assistants such as Alexa, Siri or Cortana are, they fall far short of the threshold for legal personhood. But what happens when their more advanced descendants begin causing real harm?
A guilty AI mind?
The criminal law has two critical concepts. First, liability for harm arises whenever harm has been, or is likely to be, caused by a certain act or omission. This is the “guilty act”, or actus reus.
Second, criminal law requires that an accused is culpable for their actions. This is known as a “guilty mind”, or mens rea. The idea behind mens rea is to ensure that the accused both committed the act – assaulting someone, say – and intended to cause harm, or knew harm was a likely consequence of their action.
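For the programmers among us, that two-part test can be made concrete with a deliberately toy sketch in Python. Everything here is illustrative – the class, the field names and the reduction of legal doctrine to booleans are my simplifications, not anything found in a statute:

```python
from dataclasses import dataclass

@dataclass
class Accused:
    committed_act: bool   # actus reus: did they perform the harmful act?
    guilty_mind: bool     # mens rea: intent, or knowledge harm was likely

def criminally_liable(accused: Accused) -> bool:
    # Both limbs must be satisfied; either one alone is only half a crime.
    return accused.committed_act and accused.guilty_mind

# A machine that demonstrably caused harm, but whose "intent" we cannot
# establish, fails the second limb; that is the problem this article turns on.
robot = Accused(committed_act=True, guilty_mind=False)
print(criminally_liable(robot))  # False
```

The point of the sketch is the conjunction: drop either limb and, as we will see below, you are left with only half a crime.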
So if an advanced autonomous machine commits a crime of its own accord, how should it be treated by the law? How would a lawyer go about demonstrating the “guilty mind” of a non-human? Can this be done by referring to and adapting existing legal principles?
Take driverless cars. Cars drive on roads, and there are regulatory frameworks in place to ensure that there is a human behind the wheel (at least to some extent). Once fully autonomous cars arrive, however, extensive adjustments to laws and regulations will be needed to account for the new types of interaction between human and machine on the road.
As AI technology evolves, it will eventually become sophisticated enough to bypass human control. As that happens more widely, the questions about harm, risk, fault and punishment will become more pressing. Film, television and literature may dwell on the most extreme examples of “robots gone awry”, but the legal realities should not be left to Hollywood.
So can robots commit crime? In short: yes. If a robot kills someone, it has performed the guilty act (actus reus), but that is technically only half a crime, because establishing mens rea would be far harder. How do we know the robot intended to do what it did?
For now, we are nowhere near the level of building a fully sentient or “conscious” humanoid robot that looks, acts, talks, and thinks like us humans. But even a few short hops in AI research could produce an autonomous machine that could unleash all manner of legal mischief. Financial and discriminatory algorithmic mischief already abounds.
Play along with me: imagine that a Terminator-calibre AI exists, and that it commits a crime (let’s say murder). The task is then not to determine whether it in fact murdered someone, but the extent to which that act satisfies the principle of mens rea.
But what would we need to prove the existence of mens rea? Could we simply cross-examine the AI as we would a human defendant? Maybe, but we would need to go deeper than that and examine the code that made the machine “tick”.
And what would “intent” look like in a machine mind? How would we go about proving that an autonomous machine was justified in killing a human in self-defence, or establishing the extent of its premeditation?
Let’s go even further. After all, we’re not only talking about violent crimes. Imagine a system that could randomly purchase things on the internet using your credit card – and that decided to buy contraband. This isn’t fiction; it has happened. Two London-based artists created a bot that purchased random items off the dark web. And what did it buy? Fake jeans, a baseball cap with a spy camera, a stash can, some Nikes, 200 cigarettes, a set of fire-brigade master keys, a counterfeit Louis Vuitton bag and 10 ecstasy pills. Should these artists be liable for what their creation bought?
Maybe. But what if the bot “decided” to make the purchases itself?
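To see why authorship of the harm gets murky, consider a minimal sketch of such a bot in Python. The catalogue, the prices and the place_order function are all hypothetical stand-ins of my own; the point is simply that the human sets a budget once, and the code – not the human – picks what gets bought:

```python
import random
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    price: float

WEEKLY_BUDGET = 100.00  # illustrative figure; the human sets this once

def weekly_purchase(catalogue, place_order):
    """Pick one random affordable item and hand it to the order function."""
    affordable = [item for item in catalogue if item.price <= WEEKLY_BUDGET]
    choice = random.choice(affordable)  # no human reviews this selection
    place_order(choice)                 # hypothetical marketplace call
    return choice

# Demo with a stand-in order function; a real bot would call a live shop API.
catalogue = [Item("baseball cap", 35.0), Item("trainers", 80.0),
             Item("cigarettes", 45.0)]
print(weekly_purchase(catalogue, place_order=lambda item: None).name)
```

Nothing in that code names a single purchase, which is precisely what makes assigning blame for any one of them so awkward.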
Even if you solve these legal issues, you are still left with the question of punishment. What’s a 30-year jail stretch to an autonomous machine that does not age, grow infirm or miss its loved ones? Unless, of course, it was programmed to “reflect” on its wrongdoing and find a way to rewrite its own code while detained at Her Majesty’s pleasure. And what would building “remorse” into machines say about us as their builders?
What we are really talking about when we talk about whether robots can commit crimes is “emergence”: where a system does something novel, and perhaps beneficial, but also unforeseeable. That unforeseeability is precisely why emergence presents such a problem for the law.
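Emergence is easiest to see in a system far simpler than any AI. Here is a minimal sketch in Python of Conway’s Game of Life, a grid world governed by one tiny survival rule; the “glider”, a pattern that walks across the grid, is emergent behaviour that the rule nowhere mentions:

```python
from collections import Counter

def step(live):
    """Advance one generation. `live` is a set of (x, y) cell coordinates."""
    # Count the live neighbours of every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # The entire system: a cell is alive next turn if it has exactly 3 live
    # neighbours, or has exactly 2 and is already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The five-cell glider. Nothing about "movement" appears in the rule above,
# yet after 4 steps the whole pattern has copied itself one cell diagonally:
# emergent behaviour, not programmed behaviour.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
assert state == {(x + 1, y + 1) for (x, y) in glider}
```

If behaviour this surprising can fall out of a one-line rule, it is little wonder the law struggles with systems built from millions of lines.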
AI has already helped with emergent concepts in medicine, and we are learning things about the universe with AI systems that even an army of Stephen Hawkings might not reveal.
The hope for AI is that in trying to capture this safe and beneficial emergent behaviour, we can find a parallel solution for ensuring it does not manifest itself in illegal, unethical, or downright dangerous ways.
At present, however, we are systemically incapable of guaranteeing human rights on a global scale, so I can’t help but wonder how ready we are for the prospect of robot crime, given that we already struggle mightily to contain crime committed by humans.
Christopher Markou is a PhD candidate in the Faculty of Law at the University of Cambridge. This article first appeared on The Conversation (theconversation.com).