Over the next three nights, Jeopardy’s two modern champs, Brad Rutter and Ken Jennings, will face off against IBM’s newest player in its ongoing campaign to humiliate human opponents with machines: the rather harmlessly named Watson.
While viewers at home will only see a black monitor speaking with a not-quite-War Games synthesizer, in reality Watson (which took some two dozen engineers over four years to build) is roughly the size of ten refrigerators. Come to think of it, from Jennings’ and Rutter’s perspective, that thought will probably sting less than getting intellectually smacked around by something that looks like an overgrown goth iPad.
From a technological point of view, Watson represents a sizable stepping stone in natural language processing and heuristic semantic analysis, a far greater feat than the pure math attack that allowed IBM’s Deep Blue to defeat Chess Grandmaster Garry Kasparov in 1997*.
Looking beneath the processes Watson will use shows that IBM’s real success here won’t be changing how NLP works, but rather demonstrating the sheer scale of processing power necessary to break down and answer questions in real time.
Watson answers questions by examining and breaking down the language of the question, and then comparing the words against a staggeringly massive database of reference material (all told, some 200 million pages from novels, encyclopedias, etc.) in order to determine the most likely association. Once Watson has determined a set of possible results, it chooses its answer based on a confidence score. This is no different from the process currently in use by many NLP and semantic algorithms. It’s actually the same basic process we use for Spiral16’s NLP engine, though at a much more scaled-down (and economical) level.
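To make that pipeline concrete, here is a deliberately tiny sketch of the same basic loop: tokenize the question, score each candidate answer by how much its reference text overlaps the question's terms, and only commit to an answer when the top confidence clears a threshold. Everything here (the tokenizer, the overlap-based confidence score, the toy two-entry "database") is an illustrative assumption, not Watson's or Spiral16's actual implementation.

```python
from collections import Counter

def tokenize(text):
    """Lowercase and split into word tokens (naive whitespace tokenizer)."""
    return [w.strip(".,?!'\"") for w in text.lower().split()]

def confidence(question, reference_text):
    """Fraction of the question's terms that appear in the candidate's reference text."""
    q_terms = set(tokenize(question))
    ref_terms = Counter(tokenize(reference_text))
    hits = sum(1 for t in q_terms if ref_terms[t] > 0)
    return hits / len(q_terms) if q_terms else 0.0

def answer(question, candidates, threshold=0.5):
    """Return the best-scoring candidate name, or None if confidence is too low."""
    scored = [(confidence(question, ref), name) for name, ref in candidates.items()]
    best_score, best_name = max(scored)
    return best_name if best_score >= threshold else None

# Toy "reference database": two candidate answers with text snippets.
candidates = {
    "Deep Blue": "chess computer that defeated Garry Kasparov in 1997",
    "Watson": "question answering computer that played Jeopardy in 2011",
}
print(answer("What computer defeated Kasparov at chess?", candidates))  # Deep Blue
```

The real systems obviously use far richer linguistic features than raw word overlap, but the shape is the same: many candidates, a score per candidate, and a confidence gate before buzzing in.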
Even with all this processing power and data at its disposal, Watson is not immune to flubbing the occasional question, especially given Jeopardy’s penchant for clever wordplay in its clues. As impressive as Watson is from an engineering and programming perspective, the software is incapable of actually understanding the meaning and associations of a word or sentence. Human beings, on the other hand, have experience that allows us to contextualize questions as they are asked, and it’s that same contextual understanding that helps us parse language concepts like sarcasm or puns, where a system like Watson ignores the meaning and simply looks for common word associations and similar sentence structure.
At the end of the day, it’s the oh-so-human element that informs our own model: no query, chart, or topic can stand on its own without a human being validating and contextualizing the data. While we’re certainly looking forward to seeing how well Ken Jennings and Brad Rutter hold up against Watson and the limitless budget of IBM’s software engineering team, what we’ll learn about natural language processing as a maturing technology remains to be seen. The real question is how well Watson can perform against a powerhouse like Sean Connery.
Further reading: NPR’s Morning Edition & Studio 360’s interviews with tech writer Clive Thompson, who visited with Watson’s programming team last year.
*Kasparov was so incensed at losing to Deep Blue that he insisted on a rematch, but IBM chose to dismantle Deep Blue and decline, probably in fear that Kasparov would go full-on Bobby Fischer.