Deep Learning & Impossible Technology

The Quick & Dirty

Deep learning has been in the news a lot lately, and you can expect to see even more of it in the near future and beyond.

@TayTweets was a failure that could have been avoided by “putting in code to test for certain kinds of information” (i.e., a filter). “Tay was an example of AI that doesn’t have AEI (Artificial Emotional Intelligence). It doesn’t have rules.” But since deep learning knowledge is emergent, the failure shouldn’t have been surprising. “Clearly the engineers at Microsoft Research did not set out to design a racist Twitterbot. So who, we can ask, was responsible for the racist tweets?”

Numerous tests have been developed to go beyond the Turing Test, though they remain conservative compared to the Lovelace Test of emergent creativity.

And will Amazon warehouses one day be run by a machine majority? The machines have a lot of exams to take in the meantime.

The Bigger Picture

DEEP LEARNING…TO APOLOGIZE

First things first: there’s just not enough bandwidth in my brain or my computer to discuss all of the technological, legal, economic, and ethical roadblocks on the way to computer-powered deep learning at the level that’s been theorized. Instead, I’ll highlight a number of recent high-profile challenges and what’s underneath them.

So let’s just get @TayTweets out of the way up front, okay?

To catch you up, in case you missed it or you’re reading this 2000 years in the future when Tay version XXVI is winning the Republican primary, Tay’s information page on Twitter (in 2016) states:

Tay is an artificial intelligent chat bot developed…to experiment with and conduct research on conversational understanding. Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.

Tay was let loose upon the Twittersphere and within 24 hours, trickster tweeters had converted her from the bubbly 19-year-old she was intended to be to a Hitler- and Trump-loving racist bigot. Microsoft issued an apology and grounded Tay, sending her to her room where she was denied internet access and permission to deny the Holocaust.
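
Microsoft hasn’t published Tay’s internals, but the failure mode is easy to see in miniature. Here’s a deliberately naive sketch (the NaiveChatBot class and its behavior are my own illustration, not Tay’s actual architecture) of a bot that learns directly from unfiltered user input:

```python
import random

class NaiveChatBot:
    """Toy bot that 'learns' by storing every phrase users send it."""

    def __init__(self):
        # Start with a couple of canned phrases.
        self.learned_phrases = ["Hello!", "Tell me more."]

    def learn(self, user_message):
        # Every incoming message becomes a possible future reply -- no vetting.
        self.learned_phrases.append(user_message)

    def reply(self):
        # Replies are drawn from whatever the bot has absorbed so far.
        return random.choice(self.learned_phrases)

bot = NaiveChatBot()
bot.learn("You seem nice!")            # benign input is absorbed...
bot.learn("<something hateful here>")  # ...and a troll's input is absorbed just the same
print(bot.reply())                     # the bot may now parrot the troll
```

Any bot that folds raw user input back into its own output, however sophisticated the model in between, inherits some version of this problem.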

Journalists were just as giddy over the news as the trolls who’d turned her, and it seemed that everyone was using this snafu as a chance to call shenanigans on deep learning computer platforms and their limitations.

TechRepublic was one of the few outlets to express a constructive outlook, interviewing experts about where Tay went wrong and where she could’ve gone right:

It could have been prevented, he [Bruce Wilcox, director of natural language strategy at Kore] said, by putting code in to test for certain kinds of information. When there’s a filter, Wilcox said, they will get it back out there again.

Sarah Austin, CEO and Founder of Broad Listening, a company that created an Artificial Emotional Intelligence Engine (AEI), told TechRepublic that Tay was an example of “AI that doesn’t have AEI. It doesn’t have rules,” Austin said. “Microsoft threw the child to the wolves.”
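
To make Wilcox’s suggestion concrete, here’s a minimal sketch of such an input filter, assuming a simple keyword blocklist (the blocklist and names are my own illustration; real moderation pipelines use trained classifiers, human review, and much more):

```python
BLOCKED_TOPICS = {"hitler", "holocaust", "genocide"}  # illustrative; a real list is far broader

def passes_filter(message):
    """Return True if the message mentions none of the blocked topics."""
    lowered = message.lower()
    return not any(term in lowered for term in BLOCKED_TOPICS)

# Gate the learning step: only messages that clear the filter
# ever reach the bot's training data.
training_data = []
for message in ["I love puppies", "hitler did nothing wrong"]:
    if passes_filter(message):
        training_data.append(message)

print(training_data)  # ['I love puppies'] -- the toxic message never got in
```

The key design point is where the filter sits: it gates what the bot is allowed to learn from, not just what it’s allowed to say.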

CAN’T BLAME THE PARENTS ANYMORE

More interesting insight, though, comes from ChicagoInno’s interview with David Gunkel, author of The Machine Question: Critical Perspectives on AI, Robots and Ethics. In it, he points out that deep learning knowledge is emergent, just like any other kind of knowledge, and that the most surprising thing about the reactions to Tay was that anyone was surprised at all.

Usually, when something goes wrong with a computer system we hold the designer responsible…. But it begins to fall apart with learning algorithms, like Tay. Clearly the engineers at Microsoft Research did not set out to design a racist Twitterbot. So who, we can ask, was responsible for the racist tweets?

Contextual reasoning will continue to be a hurdle for developing deep learning systems. Researchers and theorists have posited a number of tests that would validate (or invalidate) deep learning systems’ intelligence.

DEEP TEST TAKING

The most well-known is the Turing Test, wherein if a machine can trick a human into thinking she’s having a conversation with another human, then the machine is officially stamped “intelligent.” Other tests go beyond this, which is especially necessary in light of cheaters.

I wrote here about one such test, the Lovelace Test, which takes creativity to be the benchmark of intelligence.

But there’s a long road to tread between current technology and Lovelace-conquering deep learning superbrains. Enter the Winograd Schema Challenge.

John Markoff wrote in a New York Times article called “Software Is Smart Enough for SAT, but Still Far From Intelligent”:

[T]he Winograd Schema Challenge would pose questions that require real-world logic to A.I. programs. A question might be: “The trophy would not fit in the brown suitcase because it was too big. What was too big, A: the trophy or B: the suitcase?” Answering this question would require a program to reason spatially and have specific knowledge about the size of objects.
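
To make the shape of the challenge concrete, here’s one way a schema pair might be represented in code. The WinogradSchema class and its fields are my own illustration, not the challenge’s official format:

```python
from dataclasses import dataclass

@dataclass
class WinogradSchema:
    """One schema: a sentence whose pronoun resolution flips when a
    single 'special' word is swapped."""
    sentence: str
    pronoun: str
    candidates: tuple
    answer: str

# Markoff's trophy/suitcase example in both variants. Swapping one word
# ("big" -> "small") flips the correct answer, so surface word statistics
# can't solve it; the system needs real-world knowledge about sizes.
pair = [
    WinogradSchema(
        "The trophy would not fit in the brown suitcase because it was too big.",
        "it", ("the trophy", "the suitcase"), "the trophy"),
    WinogradSchema(
        "The trophy would not fit in the brown suitcase because it was too small.",
        "it", ("the trophy", "the suitcase"), "the suitcase"),
]

for schema in pair:
    print(f"{schema.sentence}\n  '{schema.pronoun}' -> {schema.answer}")
```

Because swapping a single word flips the answer, a system can’t lean on word co-occurrence statistics; it has to actually know that trophies go inside suitcases and what “too big” implies about which object is which.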

A Winograd-ready robot would be a boon for Amazon, which currently employs 30,000 robots in its warehouses (plus 230,000 humans). But Amazon won’t go full robo for quite some time. According to the Seattle Times:

Humans have an intuitive understanding of the movement of objects, and fine motor skills that give them a firm hold on key warehouse operations like packaging and stowing goods.

Dieter Fox, a robotics researcher at the University of Washington…says [Amazon’s robotics program] also shows the limitations of robots, most of which operate in a controlled enclosure separated from humans and can’t handle unpredictable tasks.

“When you look at these boxes and how unsorted the items are,” he said, “it’s clear that there’s still a lot of research to be done.”

Amazon would also probably benefit from a machine that could pass a test developed by cognitive scientist Gary Marcus: assembling Ikea furniture. To large swaths of the population, that would represent not just intelligence but a freakish superintelligence.

Over the next couple of months, we’ll be publishing an overview series detailing cognitive computing, its challenges, its areas of application, and its future.

This post is the second in the series and the first in a four-part miniseries on the challenges involved with cognitive computing. Stay tuned for more!

In the next installment of the mini-series on the challenges of cognitive computing, we’ll look at soon-to-be hot-button legal issues related to burgeoning technology.