Cognitive Content Goes to the Movies

By BRADY EVAN WALKER

WHAT CAN ARTIFICIAL INTELLIGENCE DO?

So AlphaGo can kick butt at an ancient, highly technical, and complex game; iRobot finally made a mop that can catch corners; and a newly refined technology can call out lying waiters everywhere.

It seems that specialized cognitive computing will soon find a home under any knowledge umbrella. But don’t waste all your money on bottled water and canned goods prepping for the Robot Overlord Apocalypse quite yet.

As Jean-Christophe Baillie, founder and president of Novaquark, a Paris-based virtual reality startup, points out here, artificial general intelligence still lacks the uniquely human qualities of intrinsic motivation and curiosity. Which means it can’t operate a gun until you tell it how.

And it can’t talk like a real person without either explicit instructions or a learning model in place. Cognitive content will soon change all of that.

Enter stage-left: Google’s chatbot, a machine-learning conversationalist. Cade Metz, writing for Wired, explained what differentiates Google’s chatbot from the pack:

There wasn’t a team of software engineers who meticulously coded the bot to respond to certain questions in certain ways. Google researchers Oriol Vinyals and Quoc Le built a system that could analyze existing conversations—in this case, movie dialogue—and teach itself to respond.
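To make “teach itself to respond” a little more concrete, here is a deliberately toy sketch of learning replies from existing dialogue: a lookup over a few hypothetical movie-dialogue pairs that answers a new prompt with the reply whose original line overlaps it most. This is a stand-in for the idea only; the system Vinyals and Le actually built was a neural sequence-to-sequence model, not a retrieval table, and the corpus and respond function below are invented for illustration.

```python
# Toy stand-in for "analyze existing conversations and teach itself to respond":
# store (line, reply) pairs from movie dialogue and answer a new prompt with the
# reply whose original line shares the most words with it. Purely illustrative;
# Google's actual chatbot was a neural sequence-to-sequence model.

# Hypothetical miniature corpus of movie-dialogue exchanges.
DIALOGUE_PAIRS = [
    ("what do they call a quarter pounder with cheese in paris",
     "they call it a royale with cheese"),
    ("do you want to talk about it",
     "frankly, my dear, i don't give a damn"),
    ("what is the purpose of living",
     "to live forever"),
]

def respond(prompt: str) -> str:
    """Return the stored reply whose original line overlaps the prompt the most."""
    words = set(prompt.lower().split())
    _, best_reply = max(
        DIALOGUE_PAIRS,
        key=lambda pair: len(words & set(pair[0].split())),
    )
    return best_reply

if __name__ == "__main__":
    print(respond("What do they call a Quarter Pounder with cheese?"))
    # -> they call it a royale with cheese
```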

Plenty of people are impressed with the results, though it still hasn’t even reached the uncanny valley, “a hypothesis in the field of aesthetics which holds that when features look and move almost, but not exactly, like natural beings, it causes a response of revulsion among some observers.”

Once you become acclimated to the pace of technological development, you shouldn’t be surprised to see movie-dialogue-spouting smart phones, smart refrigerators, smart thermostats, and smart cars saying that frankly, they don’t give a damn.

BUT WHAT HAPPENS WHEN…

We map the journey of every main character on an emotional level … Are they angry in the beginning? Do they have some kind of redemption? We really want to see how much subtext there is in the dialogue. It’s really deep analysis [of screenplays] and we’ve taught computers to do it. It’s all automated; no human input.

Oh, wait. That’s done. The above is a quote from Nadira Azermai from a profile in International Business Times of her company ScriptBook, which, among other things, can rewrite screenplay dialogue. And hers isn’t the only company to tackle Hollywood flops with cognitive computing (or attempts at cognitive computing).

Does this mean Google and ScriptBook are building an accidental feedback loop?

WHAT DO YOU MEAN, FEEDBACK LOOP?

Consider how deep learning models like AlphaGo or ScenGen (scenario generator) work. AlphaGo digested 100,000 human-played games of Go before it went off to its corner to play millions of games against itself, refining what it learned from humans beyond any capacity an aspiring Go player could manage.
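As a rough, hedged illustration of the self-play half of that recipe, here is a tiny tabular tic-tac-toe learner that improves its move-value estimates purely by playing against itself. It is a toy under heavy assumptions, with none of AlphaGo’s neural networks, tree search, or human-game pretraining; it only shows the shape of “play yourself, score the outcome, update what you learned.”

```python
# Toy illustration of self-play learning: a tabular tic-tac-toe agent that
# refines per-state value estimates by playing games against itself.
# Nothing like AlphaGo's deep networks or Monte Carlo tree search.
import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
         (0, 4, 8), (2, 4, 6)]                 # diagonals

def winner(board):
    """Return 'X', 'O', 'draw', or None if the game is still in progress."""
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return "draw" if " " not in board else None

def self_play_game(values, epsilon=0.1, alpha=0.2):
    """Both sides pick moves from a shared value table, then update it."""
    board, player = [" "] * 9, "X"
    visited = {"X": [], "O": []}
    result = None
    while result is None:
        moves = [i for i, cell in enumerate(board) if cell == " "]
        def value_after(m):
            nxt = list(board)
            nxt[m] = player
            return values[(player, tuple(nxt))]
        if random.random() < epsilon:
            move = random.choice(moves)          # explore
        else:
            move = max(moves, key=value_after)   # exploit current estimates
        board[move] = player
        visited[player].append((player, tuple(board)))
        result = winner(board)
        player = "O" if player == "X" else "X"
    for side in "XO":                            # score the finished game
        reward = 0.5 if result == "draw" else (1.0 if result == side else 0.0)
        for state in visited[side]:              # nudge each visited state
            values[state] += alpha * (reward - values[state])

values = defaultdict(lambda: 0.5)                # unknown states start neutral
for _ in range(50_000):                          # "games against itself"
    self_play_game(values)
```

Seeding the value table from recorded human games before the self-play loop would mirror the first phase described above.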

ScenGen doubles this model, exponentially expanding the potential by letting two scenario generators compete to churn out a fairly complete picture of everything that could happen in a given scenario: for instance, absolutely anything that could happen in a war.

On the surface, this Google-chatbot-to-ScriptBook cognitive content feedback loop may look, as it did to me when I first put the two together, like we’ll eventually be feeding screenplays written by machines into machines built to speak machine-moviespeak, ad nauseam, ad infinitum. But that ignores the overwhelming human element, the brass ring of cognitive content: creativity.

CAN A COMPUTER ACTUALLY BE CREATIVE?

This is why researchers in the field have transitioned away from the Turing Test as the benchmark for artificial intelligence: the test wherein a machine counts as “intelligent” if it can converse so convincingly that a human can’t distinguish it from a real person.

To replace “The Imitation Game” (like the movie that was probably fed into Google’s chatbot), researchers developed The Lovelace Test, named after mathematician Ada Lovelace, not Linda Lovelace. As described on Motherboard:

An artificial agent, designed by a human, passes the test only if it originates a “program” that it was not engineered to produce. The outputting of the new program—it could be an idea, a novel, a piece of music, anything—can’t be a hardware fluke, and it must be the result of processes the artificial agent can reproduce. Now here’s the kicker: The agent’s designers must not be able to explain how their original code led to this new program.

In short, to pass the Lovelace Test a computer has to create something original, all by itself.

Which makes complete sense if you understand creativity as the highest form of intelligence, as laid out in Bloom’s Taxonomy of Knowledge, sketched out below from lowest to highest:

Level 1, Recall: being able to list facts.

Level 2, Understanding: an ability to explain, to make meaning.

Level 3, Applying: an ability to implement, to use procedures.

Level 4, Analyzing: an ability to differentiate, organize, and categorize.

Level 5, Evaluating: an ability to critique and make recommendations.

Level 6, Creating: an ability to generate something new, the ultimate level of human thought.

Unlike the closed worlds that scenario generators and deep-learning platforms can exhaustively explore, the principles of art and the rules of conversation are soft, context-specific, and ever-shifting. A cognitive content service like ScriptBook needs a human-penned script to analyze because it can’t create from thin air. And a chatbot like Google’s only scratches the surface of the nuance in human interaction, and would certainly lack wit, insight, or empathy.

NASCENT COGNITIVE CONTENT CREATIVITY

As I write this, news has dropped of an AI-penned novel (or what we here at Persado call cognitive content, if you haven’t guessed that already), The Day A Computer Writes A Novel, making the finals of a Japanese literary award. As the L.A. Times reports:

Teams of writers worked with an AI program to create the cyborg novels. The level of human involvement in the novels was about 80%, one of the professors who worked on the project said.

However, the computers did the hard work — actually writing the text.

Though the L.A. Times writer doesn’t seem to think so, it’s debatable whether the hard work of creative writing is in the planning or execution stages. It seems that all the Japanese computer accomplished was, at best, the fourth level of intelligence, analyzing, since it worked off of previously written novels and a set of plot points, settings, and characters that researchers fed it.

Persado goes one better than Google’s chatbot. Persado’s framework allows it to assess and adapt its word choice and syntax. A digital ad click registers a successful communication, and that success (or failure), measured against gazillions of digital interactions, gives a machine a statistical base to learn from. Testing is something that, as far as I can tell, Google’s chatbot lacks. And unfortunately, Microsoft’s chatbot Tay is a tad too impressionable.

In other words, Persado’s cognitive content platform can deduce, with mathematical certainty, the human response to an ungodly large bucket of vocab. While it’s not creative in the Lovelace sense, it’s highly responsive, the closest a machine has yet come to empathy, because it uses the deepest sentiment analysis software out there. Put another way, a click demonstrates genuine interest, a sincere response. Tweeting racist, sexist stuff at Tay does not demonstrate genuine conversation.
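For flavor, here is what learning from click feedback can look like in its most minimal form: an epsilon-greedy choice among a few hypothetical message variants that gradually favors the ones with the best observed click-through rates. This is a generic sketch of the general idea, not Persado’s platform or API; the variants and function names are invented for illustration.

```python
# Generic sketch of learning message choice from click feedback: an
# epsilon-greedy pick over hypothetical ad-copy variants. Illustrative only;
# this is not Persado's actual system.
import random

VARIANTS = [                        # hypothetical message variants
    "Hurry! The sale ends tonight.",
    "You deserve a treat today.",
    "Members save 20% right now.",
]

impressions = [0] * len(VARIANTS)   # how often each variant was shown
clicks = [0] * len(VARIANTS)        # how often each variant was clicked

def choose_variant(epsilon=0.1):
    """Mostly serve the best-performing variant; occasionally explore."""
    if random.random() < epsilon or sum(impressions) == 0:
        return random.randrange(len(VARIANTS))
    rates = [c / n if n else 0.0 for c, n in zip(clicks, impressions)]
    return max(range(len(VARIANTS)), key=rates.__getitem__)

def record_feedback(index, clicked):
    """A click is the success signal; a non-click counts against the variant."""
    impressions[index] += 1
    if clicked:
        clicks[index] += 1

# Usage, per message served:
#   i = choose_variant()
#   ...show VARIANTS[i] to the user...
#   record_feedback(i, user_clicked)
```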

There’s a canyon-like distance between conveying a cogent message and communicating effectively. Think Donald Trump in the ring with Dale Carnegie. And this is where Google’s chatbot is, so far, destined to fall short.

While a true act of creation by machine brains seems impossibly distant, Persado can already generate miles of communication possibilities with the critical know-how to narrow them down to the very best.

Google’s chatbot does have its advantages, though. For instance, if you ask nicely, it may tell you that a Quarter Pounder with cheese in Paris is called a Royale with cheese. Just don’t ask Tay its opinion about anything. Ever. Please.