Machine Learning Goes to Jail

The Quick & Dirty

Will humanity and machine learning ever combine?

Will it ever be legal to replace healthy limbs with superhuman prosthetics?

Who’s liable in a collision between two self-driving cars?

As R&D continues on lethal autonomous weapons systems, what would we do if, say, ISIS learned how to 3D-print such weapons en masse?

Can Scarlett Johansson sue ScarBot?

Is destroying hitchBot the legal equivalent of smashing a car, killing a dog, or killing a child?

Short answer: We’re not really sure about any of the above.

The Bigger Picture

THE SHORT HISTORY OF ROBOT LAW

One of the biggest legal issues raised by machine learning systems entering the marketplace is classification.

With the eventual advent of mega-strong prosthetics and other such terrifying transhumanist technologies, should it be legal for someone with perfectly good arms and legs to amputate and upgrade?

When a surgeon is aided in the operating room by a machine learning robo-surgeon, who would be at fault if something went wrong? Legal precedent is malleable in this regard. A recent Atlantic article called “A Brief History of Robot Law” pointed to some interesting cases:

The SS Central America, a paddle wheel steamboat, had gone down in a hurricane in 1857, loaded with gold from the California Gold Rush… But in 1988, a robotic submersible operated by an outfit called the Columbus-America Discovery Group finally found the wreck roughly 200 miles off the Georgia coast.

The controversy here was whether Columbus-America had a legitimate claim to the gold, and for the first time, courts ruled in favor of “telepossession.”

But the article goes on to say that:

[Courts have] considered whether robots can be considered performers for the purposes of levying an entertainment tax (no, at least not the case of the animatronic animals that alternately entertain and frighten children at Chuck E. Cheese restaurants).

Even so, Vanna White won her suit against Samsung in 1994 over a robotic likeness of her, which might hearten Scarlett Johansson, depending on the commercial fate of Mark 1, aka ScarBot.

PRESSING LEGAL MATTERS FOR MACHINE LEARNING

One of the most pressing, real-world (read: actually important) legal hurdles on the near horizon for machine learning is the emergence of driverless cars. When two driverless cars collide, for example, who is at fault? Manufacturer A or Manufacturer B? Passenger A or Passenger B?

Yueh-Hsuan Weng, a research associate and co-founder of the RoboLaw.Asia Initiative at Peking University, aims to give machine learning robots a legal status referred to as “Third Existence” that would extend protections to our chrome-lined allies.

Weng’s intention, essentially, is to protect machine learning robots as if they were humanoid pets. If someone beats your dog to death, that jerk faces animal cruelty charges, which go beyond mere destruction of property.

If you sent your personal assistant robot out for a six-pack and it never came back because it ran into the sort of vandals who destroyed hitchBot, those vandals, in Weng’s opinion, would deserve more severe punishment than a guy who takes a baseball bat to your car.

But flip this scenario on its head. If your dog bites the face off a bubbly toddler, you’re in deep trouble. So what if the machine learning robot you just had drone-delivered to your space pod does the same thing? Who is liable in this instance?

This analogy should help clear up the smart car liability issue, but it doesn’t. It would set a weird legal precedent that shields the companies tossing this technology out into the world and puts consumers at a disadvantage.

The Guardian tried to tackle this problem in 2013:

One question is whether it’s time to rethink liability to ensure safety and justice without compromising the incentive for companies to develop the technology – “for instance, through the usage of compulsory insurance schemes or by assessing so-called ‘safe harbours’ to shield, in some cases under certain conditions, the liability of the producer of the car.”

Your pit bull is not downloading software updates from the cloud, but a driverless car is a machine learning computer, and it shouldn’t fall on you to atone when your computer does things against your will.

These legal problems are being hashed out between technology and automotive companies and the National Highway Traffic Safety Administration, with Google helming the discussion. Reuters reported on the parley and quoted a lengthy letter from the NHTSA to Google in response to the company’s stated plans for autonomous vehicles:

NHTSA will interpret ‘driver’ in the context of Google’s described motor vehicle design as referring to the (self-driving system), and not to any of the vehicle occupants.

Whew!

Lawmakers have reportedly sworn to expedite new legislation allowing autonomous vehicles into the marketplace, but Google is concerned about the state-by-state approach, considering the effect of, say, traveling from a state that doesn’t require a brake pedal and steering wheel into one that does.

AND NEXT, THE END OF HUMANITY

But driverless cars are small potatoes compared to what else is on the table. From the Daily News:

LAWS (lethal autonomous weapons systems), also commonly known as ‘Killer Robots,’ is being broadly categorised as an emerging type of autonomous technology with potential use in lethal weapons systems that will, once activated, have the ability to select, engage, and use force at targets, without any human intervention.

Such systems are coming under scrutiny just ahead of their development and deployment. The concerns include whether to ban the R&D outright, of course, but such an injunction would be pointless considering how often superpowers like the US, Russia, and China scoff at international law. The mere suspicion that one country was tinkering in the basement would send everyone else’s research hastily underground.

But this concern is less about governments who are going to fight anyway and more about such technology falling into the wrong hands.

The world has already seen the havoc that can be wreaked by fundamentalist terrorists who see their lives as secondary to duty (and loads of postmortem virgins). If a lone bin Laden had LAWS pumping out of a 3D printer in the desert, he’d hardly need to do much brainwash-recruiting to achieve his aims. And what of a Ted Kaczynski? After all, gun control is already out the window with the advent of 3D-printed pistols.

The other concern is not “Yes or No” but “How much?” As in: how autonomous should LAWS be allowed to be?

These questions are currently being discussed at the Convention on Certain Conventional Weapons in Geneva. Time will tell, but it’s definitely worrisome.

Over the next couple of months, we’ll be publishing an overview series detailing cognitive computing, its challenges, its areas of application, and its future.

This post is the third in the series and the second in a four-part miniseries on the challenges involved with cognitive computing. Stay tuned for more!

Join us next week as we ponder the economic upheavals to come and consider the possible dawn of the post-work economy.