IBM's Watson Computer Wallops 'Jeopardy!' Champs in Trial Run


"Jeopardy!" champions Ken Jennings, left, and Brad Rutter, right, look on as an IBM computer called "Watson" beats them to the buzzer to answer a question during a practice round of the "Jeopardy!" quiz show in Yorktown Heights, N.Y., Thursday, Jan. 13, 2011. (AP Photo/Seth Wenig)

Our robot overlord isn't named HAL or Skynet -- it's Watson.

After four years of development, IBM on Thursday publicly unveiled a computing system that specializes in analyzing natural human language and answering complex questions. In other words, it’s really good at Jeopardy!.

To test its acumen, the machine was pitted in an exhibition match against the most celebrated human contestants, Ken Jennings and Brad Rutter, to stunning effect. Watson quickly cleared out the entire first category without the humans getting even a buzz in.

Named after IBM founder Thomas J. Watson, the project is a defining breakthrough in artificial intelligence technology, said John E. Kelly III, senior vice president and director of IBM Research. 

“We’ve created a system that can interact in a very special way,” he said at the demonstration. “People spend their lifetime trying to advance [the field of artificial intelligence] by inches. What Watson does has demonstrated the ability to advance the field of artificial intelligence by miles.”

In other words, today's practice run was just a glimpse of what's to come. IBM announced the contest in December. The official first-ever man vs. machine "Jeopardy!" competition will take place Friday and will air on February 14, 15, and 16.

It's not the first time IBM has used its technological know-how to pit people against computers. The announcement of Watson brings back memories of Deep Blue, the supercomputer that, after a few initial setbacks, went on to defeat world chess champ Garry Kasparov. But the similarities end there.

Chess is a game tailor-made for the logical ones and zeros of artificial intelligence, with its perfect parameters and finite though tremendously large number of possible outcomes. Jeopardy! is a wholly different beast because of the human factor -- and as such, there is inherent uncertainty.

“When we deal with language, things are very different,” said David Ferrucci, principal investigator of Watson's DeepQA technology at IBM Research. “Language is ambiguous, it’s contextual, it’s implicit. Words are grounded really only in human cognition -- and there’s seemingly an infinite number of ways the same meaning can be expressed in language. It’s an incredibly difficult problem for computers.”

Coding that “human-ness” has been the primary work of 25 of IBM’s top research scientists, who have accomplished it with a mish-mash of algorithms and raw computing technology. Watson is powered by 10 racks of IBM Power 750 servers with 2,880 processor cores and 15 terabytes of RAM; it's capable of operating at a galloping 80 teraflops.
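For a sense of scale, those spec-sheet numbers can be run through some back-of-envelope arithmetic (this is just division on the figures quoted above, not IBM's published breakdown):

```python
# Back-of-envelope arithmetic from the specs quoted in the article.
total_flops = 80e12   # 80 teraflops across the whole system
cores = 2880          # processor cores in the 10 racks
ram_tb = 15           # terabytes of RAM

per_core_gflops = total_flops / cores / 1e9
ram_per_core_gb = ram_tb * 1000 / cores

print(f"{per_core_gflops:.1f} GFLOPS per core")  # ~27.8
print(f"{ram_per_core_gb:.1f} GB RAM per core")  # ~5.2
```

Roughly 28 gigaflops and 5 gigabytes of memory per core, in other words -- serious 2011-era hardware for a quiz show.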

With that sort of computing power, Watson is able to quickly analyze natural human language, scour its roughly 200 million pages of stored content -- about 1 million books worth -- and find an answer with confidence in as little as 3 seconds.

“What Jeopardy! does for us is it gave us this compelling and notable way to drive and measure that technology along the key dimensions,” Ferrucci said.

It is that human factor that makes Watson so innovative, yet in a way also somewhat unsettling -- perfectly echoed by the empty space between the two human contestants on stage. At one point, upon finishing the last clue in the category “Girls Dig Me,” Watson even made a joke, causing the audience to erupt in laughter and applause.

It seems IBM's machine is not only smart, it's funny.

With this milestone passed, fears of a machine-led takeover seem premature, but the topic was certainly on the minds of contestants. Jennings put it best, saying he was probably less John Henry and more John Connor.

“[HAL 9000] is science fiction,” Ferrucci admitted. “I don’t think we’re anywhere near that or going in that direction. One inspiration for this kind of technology from science fiction, at least for me, is the computer on Star Trek. They built a system that helps you with your information needs, understands your question, organizes it, and presents it in a way that you can digest it quickly.”

Though Watson ended the exhibition in the lead with $4,400 compared to Jennings’ $3,400 and Rutter’s $1,200, a continuation of that battle shown on internal televisions during lunch revealed that Jennings had pulled ahead after scoring a Daily Double. Watson still isn’t perfect, it seems.

While she wouldn’t reveal specific numbers, one IBMer said that Watson has lost recent practice runs to lesser opponents. Ken Jennings, of course, is on a completely different level. The computer programmer shot to public prominence when he won a record 74 consecutive games, netting him more than $2.5 million in winnings.

And while Watson operates on calculated confidence, buzzing in only when it is sufficiently sure of the right answer, people -- Jennings in particular -- use intuition, often buzzing in before they even know the answer simply because they feel like they know it. That gives him a concrete speed advantage.

Watson is constantly improving, of course. Kelly estimates its performance improves by 50 percent for every 20 months it spends learning. Still, Ferrucci was quick to remind the audience who was in charge.

After all, it was humans who built Watson in the first place.