Cognitive Computing: Finding a Definition

WHAT IS COGNITIVE COMPUTING?

The term “cognitive computing” was coined by IBM to describe its well-known supercomputing platform, Watson.

But before I start defining what cognitive computing (CC) is, let me get out of the way what it isn’t.

A 2014 Wall Street Journal article about the emerging buzzword demonstrates the general misunderstanding of what this branded technology actually proposes to achieve:

The newest umbrella term is “cognitive computing,” suggesting that we are finally developing computers that can mimic the human brain. The only problem with this term comes when you have an extended conversation with a neuroscientist, and you realize that we still know very little about how the brain works…

Mr. Davenport, the author of the above article, confuses the long-term promise of cognitive computing with its present-day premise.

Jen-Hsun Huang, CEO of NVIDIA, whose company just completed a $2 billion R&D effort on a GPU intended for self-driving cars, had this to say to the MIT Technology Review:

We’re trying to build a better plane rather than figure out how a bird works. Some people describe it as neurons, but the analogy to the brain is very loose. To us, it’s a whole bunch of mathematics that extracts the important features out of images or voice or sensor action. Any analogy to a brain is not necessarily that important.

But to return to Mr. Davenport’s complaint: many scientists are confident that, with brain-mapping technology making substantial gains almost daily and CPUs and GPUs growing more powerful even as they shrink in size, the day when a computer actually resembles a brain seems nigh.

The terms in use are vague and often overlapping, if not synonymous, but we could call neuromorphic computing (NC) a sister sector to cognitive computing. NC is explicitly concerned with mimicking not only human cognition but the physical structure of the brain in fine detail, and it is making great strides in doing so.

The most recent news on the level of detail now achievable in brain mapping is staggering.

Clay Reid of the Allen Institute was interviewed by Scientific American on the publication of his team’s hyper-detailed map of a “450-by-450-by-150 micrometer chunk of visual cortex” from a mouse: 990 neurons, the most ever mapped, compared to a human brain’s 86 billion. The work is part of the Institute’s MindScope plan, which seeks to learn how the visual system of a mouse works.

“One has always had the dream of reconstructing a cubic millimeter, sort of a complete functional unit, and there are a billion connections in that,” says Reid. “We’re never going to get to that with people [doing the drawing].” That’s why computer scientists are working to develop auto-tracing algorithms and even AIs that can read the pictures.

With those, neuroscientists will be able to map 10,000 or 100,000 or even more connections in the brain—and understand their emergent properties all the better.

Notice the implicit use of an AI that is not modeled on the brain, precisely because we don’t yet understand the brain. And notice how he points out that the project, limited by what humans can do by hand, will come to rely on smarter computers for the heavy lifting.

The computers are making significant progress too, which should do wonders for scientists facing a mounting hill of data and complexity.

The Lawrence Livermore National Laboratory just invested in a $1 million IBM super-processor called TrueNorth that “uses a distributed and parallel approach to process information, similar to the way brains constantly handle a tumble of sensory information. Running about 50 times faster than today’s most advanced systems, it should excel at deep learning, a form of artificial intelligence.” (Source: SFGate)

To be fair, the field always carries undue excitement and bloated claims that need sifting. Back in 2009, TechNewsWorld reported:

Scientists at IBM Research…have actually performed the first near real-time cortical simulation of the brain that exceeds the scale of a cat cortex and contains 1 billion spiking neurons and 10 trillion individual learning synapses.

The claim was later debunked by Henry Markram, head of the Human Brain Project:

Their so-called “neurons” are the tiniest of points you can imagine, a microscopic dot. Over 98 percent of the volume of a neuron is branched (like a tree). They just cut off all the branches and roots and took a point in the middle of the trunk to represent an entire neuron. In real life, each segment of the branches of a neuron contains dozens of ion channels that powerfully controls the information processing in a neuron. They have none of that.

SO WHAT IS COGNITIVE COMPUTING SUPPOSED TO DO?

Cognitive computing aims to solve, or aid in solving, problems that involve enormous amounts of information and demand careful discernment. The ultimate hope is to build a computer armed with perception and understanding that are refined and expanded with every interaction.

The most clearly foreseeable areas of application are sectors that require more research and knowledge than any single human could possibly absorb.

Such applications might include a platform that can critically analyze the entirety of American legal history to aid lawyers prepping for trial.

It might provide doctors with the most up-to-date and comprehensive research related to the conditions of a given patient while also assessing that patient’s medical history, enabling medical professionals to make the most accurate, well-informed decisions possible.
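For a sense of what even the crudest version of such a medical assistant might look like, here is a small Python sketch that ranks a research corpus against a patient’s record by simple keyword overlap. Everything in it (the patient record, the corpus, the relevance function) is invented for illustration; a real cognitive system would rely on far richer language understanding than keyword matching.

```python
# A deliberately crude illustration: rank research snippets by how many of a
# patient's conditions and medications they mention. The data and the
# relevance() helper are hypothetical, not part of any real platform.

patient_history = {
    "conditions": ["type 2 diabetes", "hypertension"],
    "medications": ["metformin"],
}

research_corpus = [
    "New findings on managing hypertension in patients with type 2 diabetes.",
    "A study of metformin dosage and kidney function.",
    "Advances in pediatric asthma treatment.",
]

def relevance(snippet, history):
    # Count how many of the patient's terms appear in the snippet (case-insensitive).
    text = snippet.lower()
    terms = history["conditions"] + history["medications"]
    return sum(term in text for term in terms)

# Surface the most relevant research first.
ranked = sorted(research_corpus,
                key=lambda s: relevance(s, patient_history),
                reverse=True)

for snippet in ranked:
    print(relevance(snippet, patient_history), snippet)
```

The gap between this toy and a genuinely useful assistant (one that reads full papers, weighs evidence, and explains its reasoning) is exactly the gap cognitive computing promises to close.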

WHAT’S THE DIFFERENCE BETWEEN COGNITIVE COMPUTING AND ARTIFICIAL INTELLIGENCE?

Great question!

As Rob High (IBM Fellow, Vice President, and Chief Technology Officer of IBM Watson) is reported to have said in his GPU Technology Conference keynote:

…while they aren’t much different conceptually, the goal of the Watson team is far more about making humans better at what they do than recreating the human brain in machine form.

Lifehacker also quoted High on a separate occasion, where he gave his own definition:

Cognitive systems are able to learn their behavior through education. They support forms of expression that are more natural for human interaction [which] allows them to interpret data regardless of how it is communicated. Their primary value is their expertise and the ability to continuously evolve as they experience new information, new scenarios and new responses — all at enormous scale.

Both use machine learning technology with the intent of creating a platform that can rationally assess and act on data, patterns, and situations. But does the promise of AI, inherent in the very name “artificial intelligence,” imply something broader that encompasses independence?

According to High, the aim of CC is to achieve a system of collaboration with humans, whereas, one might assume, the dream of AI envisions an autonomous digital creature.

But the terms are confusing and often confused (if we’re assuming they aren’t synonymous).

As Dharmendra Modha, manager of CC at IBM Research – Almaden, told TechNewsWorld in 2009 (in the same article where he made his cat-brain claims, so take this with a grain of salt):

Typically, in AI, one creates an algorithm to solve a particular problem… Cognitive computing seeks a universal algorithm for the brain. This algorithm would be able to solve a vast array of problems.

So who’s right? Perhaps IBM’s aims have pivoted in the last seven years?

In 2014, researchers Rajeev Ronanki and David Steier told Wired that “the computer system assists or replaces humans who possess specific analytical skills in order to help executives make better-informed decisions.”

In this science fiction world we’re barreling toward, it sounds like everyone’s right!

SO…WAIT. WHAT’S COGNITIVE COMPUTING?

Cognitive computing systems are defined by specific abilities. They must be able to:

  • Understand and adapt to dynamic information and objectives while managing ambiguity and uncertainty.
  • Interact with users to define needs and with other devices and people to address those needs.
  • Iterate processes and interactions and respond with relevant information.
  • Learn from every previous iteration to refine processes.
  • Identify and adapt to contextual information, structured or unstructured, and drawn from multiple sources.
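To make those abstractions a bit more concrete, here is a minimal Python sketch of the interact-respond-learn loop the list describes. The class and method names (CognitiveAgent, respond, learn) are entirely hypothetical; this illustrates the feedback pattern, not IBM Watson or any real cognitive platform.

```python
# A toy sketch of the interact / respond / learn loop described above.
# "Knowledge" here is just a running tally of feedback per answer, standing
# in for a real model that would be refined with every interaction.

class CognitiveAgent:
    def __init__(self):
        self.knowledge = {}  # query -> {answer: cumulative feedback score}

    def respond(self, query):
        # Pick the stored answer with the best feedback so far, or admit
        # uncertainty when the context is new (handling ambiguity).
        candidates = self.knowledge.get(query, {})
        if not candidates:
            return "I don't know yet -- tell me more."
        return max(candidates, key=candidates.get)

    def learn(self, query, answer, feedback):
        # Every interaction updates the agent's state, so later responses
        # are refined by earlier iterations.
        self.knowledge.setdefault(query, {}).setdefault(answer, 0)
        self.knowledge[query][answer] += feedback


agent = CognitiveAgent()
print(agent.respond("capital of France?"))      # unknown at first
agent.learn("capital of France?", "Paris", +1)  # a user supplies an answer
print(agent.respond("capital of France?"))      # now responds with "Paris"
```

The point of the sketch is the shape of the loop: interact, respond with the most relevant information available, then fold the outcome back in so the next iteration is sharper.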

But what capabilities, more or fewer, would a sophisticated AI possess that an equally sophisticated cognitive computer would not?

So what’s the diff? The distinction is semantic. Ultimately, the term cognitive computing simply helps to manage expectations in the face of overeager futurists and sci-fi saturation.

If the eventual reality (current estimates range from 20 to 50 years away) is a hyper-intelligent, hyper-creative, hyper-empathetic machine, what’s missing? I finally turned to the dictionary for answers.

The American Heritage Dictionary definition of “artificial intelligence” reads:

The theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

That sounds a lot like cognitive computing.

But digging further, down to the fourth and final definition on Wiktionary, science fiction offers an answer with a facet not mentioned above: self-awareness, which, turning back to the American Heritage, is:

[C]onscious knowledge of one’s own character, feelings, motives, and desires.

Perhaps forms of empathy and creativity are on the distant horizon, but what about self-awareness? Why would programmers even worry about that? Maybe out of boredom, after all their jobs are taken by the very thing they invented?

And are sci-fi writers to be trusted for definitions over computer scientists? Maybe?

The Times of India reported in February of 2016, in a piece titled “Why Tech Companies are Hiring Sci-Fi Writers”:

At Oculus, a virtual reality company, a copy of the popular sci-fi novel ‘Ready Player One’ is handed out to new hires. Magic Leap, a secretive augmented reality startup, has hired science fiction and fantasy writers…

“Like many other people working in the tech space, I’m not a creative person,” said Palmer Luckey, 23, a co-founder of Oculus, which was bought by Facebook for $2 billion in 2014. “It’s nice that science fiction exists because these are really creative people figuring out what the ultimate use of any technology might be. They come up with a lot of incredible ideas.”

NB: The popular Blackwell Philosophy and Pop Culture Series includes a book called Terminator and Philosophy: I’ll Be Back, Therefore I Am. Spooky.

Is this what distinguishes AI from CC, or is it just a facet of the new term that hasn’t come up yet? Is the Terminator actually a cognitive computer!?

The Grand Conclusion!

Q. What is cognitive computing?

A. What isn’t it!?


Over the next six weeks, we’ll be publishing an overview series detailing cognitive computing, its challenges, its areas of application, and its future. This post is the first in the series. Stay tuned for more!