In My Words: More than just a game – the emergence of cognitive computing

Elon University scholar Pranab Das' past research on neural networks inspired a guest column published by regional newspapers in which he gives a preview of what humans can expect from computers in the coming years.

The following column appeared recently in the Winston-Salem Journal, the Fayetteville Observer and the (Burlington, N.C.) Times-News via the Elon University Writers Syndicate. Views are those of the author and not Elon University.

*****

More than just a game: The emergence of cognitive computing
By Pranab Das – daspra@elon.edu

I’m not very good at the Chinese board game Go, but it’s fun and I like telling people that it’s the only game a computer can’t win. As of this month, I’ll have to change my tune. Google’s AlphaGo beat a human Go master in a victory with implications far beyond that ancient game.

AlphaGo, like IBM’s Watson, is the cutting edge of a new class of smart machines poised to rapidly change the world economy. IBM trumpeted the coming revolution during the Super Bowl in ads with the ominous tagline “Welcome to the cognitive era.” It’s a scary slogan because people have always been pretty cognitive (with the exception of most politicians). When machines get cognitive, they’ll be able to do all sorts of new tasks and we’ll lose one of our key advantages.

This so-called cognitive era depends on the emergence of a new kind of computing. Old-fashioned computers rely on code to follow elaborate rules. They excel at clear-cut problems and always do as they’re told. But they tend to fail when situations get too subtle, imprecise or variable.
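To see how brittle that rule-following style can be, here is a minimal sketch of my own (not anything from the column) in Python: a digit recognizer built from hard-coded rules that works on a clean input and fails the moment the input slants.

```python
# Illustrative only: a '7'-recognizer built from explicit rules, the
# old-fashioned style of computing described above.

def is_seven_rule_based(pixels):
    """Recognize a '7' on a 3x3 grid of 0/1 pixels using fixed rules."""
    top_row_filled = all(pixels[0])                 # rule 1: solid top stroke
    diagonal = bool(pixels[1][1] and pixels[2][0])  # rule 2: descending diagonal
    return top_row_filled and diagonal

clean_seven = [[1, 1, 1],
               [0, 1, 0],
               [1, 0, 0]]
slanted_seven = [[1, 1, 0],   # the same digit, drawn slightly slanted
                 [0, 1, 1],
                 [0, 1, 0]]

print(is_seven_rule_based(clean_seven))    # True: the rules match
print(is_seven_rule_based(slanted_seven))  # False: the rules are too rigid
```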

AlphaGo and its ilk use ‘artificial neural networks’ loosely modeled on biological brains. They take input, which can be anything from letters on a page to the moves in a chess game to spoken words, and then they provide output. They might identify the letter they see, propose a countermove or answer a question. What makes neural networks special are their ‘hidden layers’: a collection of interlinked processors between input and output where unpredictable structures emerge.
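For readers who want to see that structure concretely, here is a minimal sketch in Python (using numpy). The layer sizes and random weights are illustrative stand-ins, nothing like AlphaGo’s actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

n_input, n_hidden, n_output = 9, 16, 2   # e.g., a 3x3 image in, two answers out

# The 'interconnections' between layers are just matrices of weights.
W_hidden = rng.normal(size=(n_input, n_hidden))   # input -> hidden layer
W_output = rng.normal(size=(n_hidden, n_output))  # hidden layer -> output

def forward(x):
    """Pass input through the hidden layer to produce an output."""
    hidden = np.tanh(x @ W_hidden)   # the hidden layer of linked processors
    return hidden @ W_output         # output: a score for each possible answer

x = rng.random(n_input)              # some input (pixels, moves, sounds...)
print(forward(x))                    # the network's answer, untrained so far
```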

We train neural networks by repeating a simple procedure. Give the network some input and record whether it provides the right output. Then run some clever software called ‘learning algorithms’. Those algorithms automatically adjust the hidden processors’ interconnections to maximize success. Weirdly, programmers have no idea in advance how those adjustments will turn out. And the hidden layers are so vastly complicated that we can never tell exactly what rewiring was called for.

All we know is that the network gets better with training until it reaches a plateau. When that happens, we can ‘freeze’ the network and set it to work on new problems.
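Here is a minimal sketch of that whole train-until-plateau-then-freeze cycle, assuming gradient descent as the learning algorithm (the column doesn’t name one) and using a toy problem (learning XOR) rather than anything real.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # the right outputs (XOR)

W1 = rng.normal(size=(2, 8))   # input -> hidden interconnections
W2 = rng.normal(size=(8, 1))   # hidden -> output interconnections
lr, losses = 0.5, []

for step in range(20_000):
    hidden = np.tanh(X @ W1)                    # the hidden layer at work
    out = 1 / (1 + np.exp(-(hidden @ W2)))      # the network's output
    loss = float(np.mean((out - y) ** 2))       # how wrong was it?
    losses.append(loss)

    # Stop when training plateaus: almost no improvement over 500 steps.
    if step > 500 and losses[-501] - loss < 1e-7:
        break

    # The learning algorithm: nudge every interconnection to cut the error.
    grad_out = 2 * (out - y) * out * (1 - out) / len(X)
    grad_hidden = (grad_out @ W2.T) * (1 - hidden ** 2)
    W2 -= lr * hidden.T @ grad_out
    W1 -= lr * X.T @ grad_hidden

frozen = (W1.copy(), W2.copy())   # 'freeze' the network and set it to work
print(f"plateaued at step {step}, loss {loss:.4f}")
print(out.round(2).ravel())       # close to 0, 1, 1, 0 once training succeeds
```

The point of the sketch is the shape of the loop: nobody writes the final weights by hand. The learning algorithm finds them, and we only discover afterward what the hidden layer settled into.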

Google trained AlphaGo on 30 million moves before it reached that point. By then it could predict human countermoves 57 percent of the time. That’s pretty good, but Google didn’t stop there. Its programmers gave AlphaGo the power to train itself, letting it play games against itself and, after each game, running learning algorithms to strengthen its play. We’ve seen the outcome, and it’s impressive.
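Here is a minimal, runnable sketch of the self-play idea on a toy game (Nim: players alternate taking one or two stones, and whoever takes the last stone wins). It is a stand-in of my own devising; AlphaGo’s actual method combined deep neural networks, reinforcement learning and tree search.

```python
import random

N_STONES = 7
# One 'weight' per (stones left, move) pair; a bigger weight means a
# more preferred move. All moves start out equally likely.
policy = {s: {1: 1.0, 2: 1.0} for s in range(1, N_STONES + 1)}

def choose(stones):
    """Sample a legal move in proportion to its learned weight."""
    legal = [m for m in (1, 2) if m <= stones]
    weights = [policy[stones][m] for m in legal]
    return random.choices(legal, weights)[0]

for _ in range(5_000):                      # the program plays itself
    stones, player = N_STONES, 0
    history = {0: [], 1: []}
    while stones > 0:
        move = choose(stones)
        history[player].append((stones, move))
        stones -= move
        player = 1 - player
    winner = 1 - player                     # whoever took the last stone won
    # Learning step: strengthen the winner's moves, weaken the loser's.
    for s, m in history[winner]:
        policy[s][m] += 0.1
    for s, m in history[1 - winner]:
        policy[s][m] = max(0.1, policy[s][m] - 0.1)

for s in range(1, N_STONES + 1):            # the learned preferences
    print(s, "stones left -> take", max(policy[s], key=policy[s].get))
```

After a few thousand games against itself, the program should come to prefer moves that leave its opponent a multiple of three stones, the known winning strategy, without anyone programming that rule in.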

These machines get very good at responding to variable situations. They can read words on a slanted piece of paper or understand unusual accents. More importantly, they can also recognize useful information in a mass of documents or detect patterns in fuzzy pictures. And that’s why lawyers, radiologists and a lot of other professionals should worry.

Young lawyers often start their careers by plowing through boxes of memos and records looking for facts relevant to big cases. Until recently, humans were much better than computers at sniffing out the right stuff. No longer. Machines can now do it faster, much cheaper, and generally almost as well. Bye-bye six-figure starting salaries, kids. Similarly, lots of radiologists stare at grainy X-ray images looking for broken bones, but even experienced doctors often miss fractured ribs because they’re just so hard to see. Intelligent machines can do a better job. And they don’t play golf.

The implications are enormous. Any job that requires sorting, detecting patterns, or processing words or speech is fair game. Why hire security people to stare at TV monitors when a computer can catch a fleeting shadow much better? Why have pharmacists watch for drug interactions and count pills? For that matter, why use humans to edit op-ed pieces when a computer can detect snarky remarks almost as well?

Watson can translate your next novel or watch your warehouse for pilfering. But it can’t innovate, it can’t dream, and it can’t think about itself. Thankfully, we’re a long way from a computer that could set up its own training scheme, build its own learning algorithms, or decide what sort of problems to think about.

So Skynet isn’t taking over any time soon. But your job may be in jeopardy. We’re on the threshold of a workplace shakeup as sweeping as the industrial revolution or the information age. While humans get used to it, I guess we’ll have to play games with each other.

Pranab Das is a professor of physics at Elon University. He did research on neural networks in the 1990s and is currently studying the science and philosophy of emergence.

*****

Elon University faculty with an interest in sharing their expertise with wider audiences are encouraged to contact the Office of University Communications if they would like assistance with prospective newspaper op-ed submissions.