Welcome Watson: IBM’s trivia machine maker comes to Tech

Over the course of two nights the week of Feb. 14, mankind may have met the first of its robotic overlords. While it is too early to worry about time-travelling assassin bots or robotic armies harvesting humanity for energy, it would appear that IBM’s Watson has brought mankind’s dominance of the world of television game shows to an end.

After Watson’s victory over Jeopardy! All-Stars Ken Jennings and Brad Rutter on Wednesday, Feb. 16, Bill Murdock, a research team member at IBM’s Watson Research Center, gave a talk on Tech’s campus the following day about IBM’s newest computational juggernaut.

Murdock, Ph.D. CS ‘01, worked as a member of Ashok Goel’s Design and Intelligence Laboratory while at Tech.

Despite the high turnout, the talk was aimed at a specialized crowd, as much of the lecture focused on topics that would be unfamiliar to those without a background in machine learning and artificial intelligence. Still, it was pitched so that an average Tech student, even without following everything Murdock discussed, would learn a great deal about how Watson works.

One of the most important things to realize about Watson is that it is not simply a massive database. While it does have massive stores of knowledge (to the tune of 15 terabytes of RAM), the real power behind Watson lies in its ability to understand the question it is being asked and then use learning algorithms to evaluate what it knows and choose the best answer.

One of the biggest topics of discussion was, of course, how Watson chose these answers. While answering a Jeopardy! question requires dozens of complex algorithms and a massive bank of computers, the overall process is fairly straightforward.

First, Watson breaks down the question into one or more possible interpretations. For each interpretation, it then generates a list of possible answers. For each answer in the list, it searches its databanks for evidence supporting or refuting that answer, and grades how likely that answer is. After this, Watson merges the hypotheses together and chooses the answer it believes most likely to be correct.

Watson’s power really becomes apparent here, as a single question can result in several interpretations, each of which has hundreds of possible answers. In turn, each answer is supported or refuted by thousands of pieces of evidence, all of which can be interpreted hundreds of thousands of ways.
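To make the shape of that process concrete, the sketch below is a rough, hypothetical rendering of the loop Murdock described, not IBM’s actual DeepQA code. It interprets the clue, generates candidate answers for each interpretation, scores every candidate against the evidence found for it, then merges the hypotheses and returns the most confident answer. All of the function names are placeholders.

```python
# Hypothetical sketch of the hypothesis-and-evidence loop described above.
# This is illustrative only -- not IBM's DeepQA code.
from collections import defaultdict

def answer_clue(clue, interpret, generate_candidates, gather_evidence, score_evidence):
    merged = defaultdict(float)
    for interpretation in interpret(clue):               # one clue, several readings
        for candidate in generate_candidates(interpretation):
            # add up scores from supporting (positive) and refuting (negative) passages
            confidence = sum(score_evidence(candidate, passage)
                             for passage in gather_evidence(candidate, interpretation))
            merged[candidate] = max(merged[candidate], confidence)   # merge hypotheses
    # choose the answer believed most likely to be correct
    return max(merged, key=merged.get) if merged else None
```

In the real system, each of those stages is itself a battery of algorithms running in parallel, which is where the massive bank of computers Murdock mentioned comes in.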

The fact that Watson can find and process this amount of information in about three seconds shows how far computing has come since IBM’s other famous supercomputer, Deep Blue, defeated world champion Garry Kasparov in chess in 1997.

Here, though, Murdock believes that comparing Deep Blue and Watson is like comparing apples and oranges. While Watson obviously could not work its magic without a hefty supply of hardware, the real stars of the show are the learning, search and language-processing algorithms that Watson makes use of.

While Deep Blue represented what could be done with enough computing power, Watson represents what can be done by using that power in conjunction with the newest, most powerful algorithms in learning.

According to Murdock, one of the biggest advances Watson represents is the ability to handle ambiguity. Again, the comparison to Deep Blue came up, this time in terms of how their challenges were different.

“Real language is real hard… In chess, you had a finite, well-defined search space [with] explicit, unambiguous mathematical rules, but that’s not the case here,” Murdock said.

Murdock described how, compared to chess, natural language processing is an incredibly difficult computing problem, due (among other things) to the ambiguity inherent in human language.

As an example, Murdock gave two sentences that contain the same fact: that Jack Welch was once the head of G.E. Each sentence, however, reveals that knowledge differently.

The first states the information plainly: “Jack Welch ran G.E.” The second buries the same information in a much more complex statement: “If leadership is an art, then Jack Welch proved himself a master painter during his tenure at G.E.”

While a person could easily extract the information from either sentence, writing a computer program that can extract it from sentences like the second is a challenge that was only solved recently.
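To illustrate the gap Murdock was pointing at, the toy example below (hypothetical, and nothing like Watson’s actual language machinery) uses a naive pattern to pull a “who led what” relation out of text. It handles the plain sentence but comes up empty on the metaphorical one, even though both state the same fact.

```python
# Hypothetical toy extractor -- shows why naive pattern matching fails on real language.
import re

PATTERN = re.compile(r"^(?P<person>[A-Z][\w. ]+?) (?:ran|led|headed) (?P<org>[A-Z][\w.]+)")

def extract_leader(sentence):
    match = PATTERN.match(sentence)
    return (match.group("person"), match.group("org")) if match else None

print(extract_leader("Jack Welch ran G.E."))
# ('Jack Welch', 'G.E.')
print(extract_leader("If leadership is an art, then Jack Welch proved himself "
                     "a master painter during his tenure at G.E."))
# None -- the fact is there, but the naive pattern cannot see it
```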

An overarching theme of the talk was how Watson struggled with concepts a human would find easy but excelled in other areas. Murdock discussed several incorrect answers Watson gave and the reasoning behind why Watson gave them.

For example, one of the questions on the show asked about a physical oddity of George Eyser, a gymnast in the 1904 Summer Olympics.

While most humans would have trouble quickly pulling together what information (if any) they knew about the topic at hand, according to Murdock, the logs showed that Watson almost immediately found a passage that said, “George Eyser’s left leg was made of wood.”

However, Watson was unable to understand what about this was an “oddity,” and, as a result, chose “leg” as its answer rather than the correct response, “wooden leg.”

As for where Watson will go next, it’s no secret that IBM wants to see a Watson-like system put into place somewhere in the medical industry.

With the large number of variables involved in diagnosing a disease, and the impossibility of any one person accurately keeping track of the enormous base of medical knowledge available, Watson’s engineers feel that medicine is a field in which Watson could flourish.
