On the surface, it has all the makings of a gimmick: pitting the greatest champion in the history of the TV game show Jeopardy! against a computer opponent, much like chess grandmaster Garry Kasparov took on supercomputer Deep Blue more than a decade ago.

But behind IBM’s latest media event is an audacious scientific goal: to bridge the gap between artificial and human intelligence.

The supercomputer Watson, named after the company’s founder, challenged former Jeopardy! champions Ken Jennings and Brad Rutter in a quick, three-category showdown in preparation for a full episode of Jeopardy! next month. The computer won the preview round by a slim $1,000 margin, but the four-year task of designing and perfecting the machine represents something far more important than a Jeopardy! victory.

Previous attempts at building computers that could beat human beings in games such as checkers and chess were successful in large part because those tasks could be represented mathematically. As such, more and more computing power – combined with some smart algorithms – could be relied on to produce better and better results.
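
In code, that brute-force approach can be sketched in a few lines. The toy example below is not Deep Blue’s program – it is only an illustration, using a simple stone-taking game, of how a game with exact rules reduces to a search over possible moves, the kind of problem that raw computing power attacks directly.

```python
# Toy illustration only - not Deep Blue's code. In a game with exact rules,
# every position can be searched: here, game-tree search solves a simple Nim
# variant where players remove 1-3 stones and whoever takes the last stone wins.
from functools import lru_cache

@lru_cache(maxsize=None)
def best_move(stones: int) -> tuple[int, bool]:
    """Return (stones to take, whether the player to move can force a win)."""
    if stones == 0:
        return 0, False  # no move left: the previous player took the last stone
    for take in (1, 2, 3):
        if take <= stones:
            _, opponent_can_win = best_move(stones - take)
            if not opponent_can_win:
                return take, True  # this move leaves the opponent in a losing position
    return 1, False  # every available move loses against perfect play

move, winnable = best_move(17)
print(f"With 17 stones, take {move}; forced win: {winnable}")  # take 1; True
```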

But Jeopardy! isn’t just a numbers game – success demands a mastery of language, something computers were never designed for.

“Natural language is such a different ballgame,” says Dave Ferrucci, the lead researcher on the Watson project. “Humans write things for other humans – there’s stuff that’s unspoken, only implied. There’s nuances; there’s almost an infinite number of ways you can say something.”

For IBM, perfecting a technology that can not only sift through but also understand natural language presents a scientific and financial gold mine. For decades, one of the essential benchmarks of artificial intelligence has been the Turing test, in which a person holds a blind, text-based conversation with two unseen entities – a computer and another human. If the person is unable to determine from the dialogue which is which, the computer is said to have passed the test: it has successfully mimicked human intelligence. A computer can manage that easily if the conversation sticks to, say, mathematical questions. But the test becomes far more confounding for the machine if the person engages in the same kind of conversation they’d have with someone sitting beside them on a bus.

A computer that could pass that sort of test would revolutionize the way people and machines interact, bringing computer users closer to what has so far been squarely in the realm of science fiction: a machine that understands and accurately engages in normal human conversation.

IBM’s latest creation may see its most immediately profitable applications in the field of business analytics. Thanks to the digital revolution, large companies routinely collect far more data than they can effectively analyze – everything from customer information to buying habits to website metrics. Some of the biggest tech firms in the world are racing to develop solutions that can not only sort through the massive influx of information, but make the best possible business decisions based on that data. With Watson, IBM may have a tool that searches with the speed of a supercomputer, but thinks with something approximating a human mind.

“We can use computers to find documents with keywords, but the computers don’t know what those documents say,” Dr. Ferrucci says. “What if they did?”

To achieve that level of sophistication, researchers had to do more than simply pack as much processing power as possible into a machine and unleash it on the game. Watson is an incredibly complex labyrinth of information and algorithms designed to do everything from grouping items by word association to deciphering puns. Such tools are vital if Watson is to do well in Jeopardy! categories such as “Before and After,” where the correct response almost always includes an element of wordplay.
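
How Watson actually weighs its evidence is far more elaborate than any short example can show, but the hypothetical sketch below illustrates the general idea: several simple scorers – here an invented keyword-overlap check and an answer-type check – are blended into a single confidence value for one candidate answer. Every function name, weight and example in it is made up for illustration.

```python
# Hypothetical sketch only - not IBM's code. It shows the idea of combining
# several weak evidence scores into one confidence value for a candidate answer.
import re

def tokens(text: str) -> set[str]:
    """Lower-cased word set, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def keyword_overlap(clue: str, passage: str) -> float:
    """Fraction of the clue's words that also appear in a supporting passage."""
    clue_words = tokens(clue)
    return len(clue_words & tokens(passage)) / max(len(clue_words), 1)

def type_match(expected_type: str, candidate_type: str) -> float:
    """Crude check that the candidate is the kind of thing the clue asks about."""
    return 1.0 if expected_type == candidate_type else 0.0

def confidence(clue: str, passage: str, expected: str, candidate: str,
               weights: tuple[float, float] = (0.6, 0.4)) -> float:
    """Weighted blend of the individual evidence scores, in the range 0 to 1."""
    scores = (keyword_overlap(clue, passage), type_match(expected, candidate))
    return sum(w * s for w, s in zip(weights, scores))

clue = "This Italian astronomer was tried by the Inquisition in 1633"
passage = "Galileo Galilei the Italian astronomer was tried by the Roman Inquisition in 1633"
print(round(confidence(clue, passage, "person", "person"), 2))  # prints 0.94
```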

Like the human contestants, Watson is required to present its answer in the form of a question. While the computer’s hardware, which occupies the basement floor of the studio where the show will be filmed, is packed full of reference-book material and encyclopedias, it is not connected to the Internet. It faces the same time and wager constraints as its opponents.

In addition to finding the right answer, Watson must also gauge how confident it is that the answer is correct, so it knows whether to risk a response. Dr. Ferrucci says a simple sweeping search of the reference material, without the complex algorithms, yields a correct result about one-third of the time, but it’s impossible to tell which third. After four years of tweaking, Watson’s accuracy is up to about 85 per cent on the clues it is confident it has answered correctly.
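
Those numbers suggest the basic decision rule: estimate a confidence for the best candidate answer and buzz in only when it clears a bar. The sketch below is a hypothetical illustration of that thresholding logic – the threshold value and the candidate answers are invented, not Watson’s.

```python
# Hypothetical sketch only - not Watson's code. It illustrates the buzz-or-pass
# decision described above: answer only when the best candidate's estimated
# confidence clears a threshold. The threshold and candidates are invented.

def should_buzz(candidates: list[tuple[str, float]],
                threshold: float = 0.8) -> tuple[str, float] | None:
    """Return (answer, confidence) for the top candidate, or None to stay silent."""
    if not candidates:
        return None
    answer, conf = max(candidates, key=lambda pair: pair[1])
    return (answer, conf) if conf >= threshold else None

# Made-up candidate answers with made-up confidence estimates.
candidates = [("Who is Copernicus?", 0.34), ("Who is Galileo?", 0.91)]
print(should_buzz(candidates) or "Pass - too risky to buzz in.")
```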