Tuesday, October 14, 2008

General Results versus Illuminating Examples

I recently came across this description of economists, attributed by Paul Krugman to Robert Solow: "There are two kinds of economists: those who look for general results and those who look for illuminating examples." This dichotomy struck me as rather interesting, and upon reflection it was pretty clear to me that my own AI-research methodology falls overwhelmingly on the general-results side. I am drawn to fairly basic and general questions and issues; very rarely am I motivated by a specific example.

Note that despite the superficial similarity between this dichotomy and the "theory versus empirical" one, the two are quite unrelated. One can take a specific example, e.g., a particular human capability or indeed a particular human failing, and then either build a theory of intelligence from it or do empirical work on it. Linguists do that. Cognitive scientists do that. Psychologists do that.

Is AI research by its nature restricted to the general-results side of the divide? Maybe this is the divide that separates AI from, say, cognitive science: the former seeks general results while the latter explains illuminating examples.

To all three people who might read this: are there examples of AI research that stemmed from illuminating examples? What role do illuminating examples play in AI?

6 comments:

Yael Niv said...

That's interesting... As an outsider to AI, I sometimes think it is motivated from the opposite end. Isn't chess (or backgammon) an illuminating example? It is certainly not a general result, as we don't solve most problems in life the way we solve a game of chess...

Satinder Singh said...

Yael, chess is indeed an excellent way to carry this discussion forward.

Is chess an illuminating example? Or is it more like an application domain that was held up as a threshold test for achieving interesting AI? I think chess was mostly the latter, because the principles of search (a major contribution of AI to computer science) were developed as a general result and only later specialized to and enhanced for chess.
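For concreteness, here is a minimal sketch of that general result: minimax search with alpha-beta pruning over a toy game tree. The nested-list tree and its leaf values are purely illustrative assumptions; nothing in the sketch is specific to chess.

    # Minimax with alpha-beta pruning over a toy game tree.  Internal
    # nodes are lists of children; leaves are static evaluations.
    def alphabeta(node, alpha, beta, maximizing):
        if not isinstance(node, list):       # leaf: return its static evaluation
            return node
        if maximizing:
            best = float('-inf')
            for child in node:
                best = max(best, alphabeta(child, alpha, beta, False))
                alpha = max(alpha, best)
                if alpha >= beta:            # the minimizer will avoid this branch
                    break
            return best
        else:
            best = float('inf')
            for child in node:
                best = min(best, alphabeta(child, alpha, beta, True))
                beta = min(beta, best)
                if beta <= alpha:            # the maximizer will avoid this branch
                    break
            return best

    # A depth-2 example: the maximizer can guarantee a value of 3.
    print(alphabeta([[3, 5], [2, 9]], float('-inf'), float('inf'), True))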

At the same time, there are probably folks whose AI research was indeed motivated by chess as an illuminating example, i.e., that one example drove the kinds of methods and ideas they explored.

So perhaps chess is a bit of both.

These days Poker and Go play the role chess used to. Perhaps we should ask those involved in them whether they view these games as illuminating examples or as application domains in which to demonstrate their general results. [I will ask Michael Bowling about Poker and David Silver about Go and see if they will respond here.]

Anonymous said...

Very interesting...

When I first read Satinder's post, I thought I might fall into the "illuminating examples" camp. When I began working on poker, it wasn't with any techniques in mind; I was ready to do whatever it took to make progress, and I thought that something significant would come from that. In fact, what has come out of the last few years is entirely general (solving arbitrarily large extensive games), and that is what excites me about this line of research. Contrast this with RoboCup, where, in retrospect, 90+ percent of my work over five years was specific to robot soccer (maybe "illuminating" but certainly not "general"), or worse... specific to a single league of robot soccer. One could say that modern research in poker is just evaluating current game-theoretic advancements in a challenging domain, but that ignores the fact that those game-theoretic advancements happened because of modern poker research.
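To give a flavor of how simple the general core can be: regret matching, the rule at the heart of counterfactual regret minimization, plays each action in proportion to its accumulated positive regret. Below is a minimal sketch in self-play on matching pennies; the choice of game is purely illustrative, and in self-play it is the average strategy that converges to equilibrium.

    import random

    # Regret matching: play each action with probability proportional to
    # its accumulated positive regret.  In self-play, the *average*
    # strategy converges to equilibrium.  Matching pennies is used here
    # purely as an illustrative game.
    PAYOFF = [[1, -1], [-1, 1]]   # row player's payoff; column player gets the negative

    def strategy(regrets):
        pos = [max(r, 0.0) for r in regrets]
        total = sum(pos)
        return [p / total for p in pos] if total > 0 else [0.5, 0.5]

    row_reg, col_reg = [0.0, 0.0], [0.0, 0.0]
    row_avg = [0.0, 0.0]
    T = 100000

    for t in range(T):
        rs, cs = strategy(row_reg), strategy(col_reg)
        r = random.choices([0, 1], weights=rs)[0]
        c = random.choices([0, 1], weights=cs)[0]
        for a in (0, 1):          # regret = counterfactual payoff - actual payoff
            row_reg[a] += PAYOFF[a][c] - PAYOFF[r][c]
            col_reg[a] += -PAYOFF[r][a] + PAYOFF[r][c]
        row_avg = [row_avg[a] + rs[a] for a in (0, 1)]

    print([x / T for x in row_avg])  # tends toward the equilibrium [0.5, 0.5]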

But maybe this is what could be meant by "illuminating examples": they are examples that illuminate general issues. Or maybe it's a third way. I think a number of very general results were motivated or illuminated by specific examples. I was surprised when I heard Csaba say that he developed UCT (the technique that has revolutionized game-playing programs in Go) with Go in mind. Csaba seems as much a hardcore generalist as Satinder, and UCT is indeed general, but a specific example pushed things forward.
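For those who haven't seen it, the core of UCT is just the UCB1 bandit rule applied at every node of a search tree. The minimal Node class below is a hypothetical stand-in for a real tree representation, and C is the usual exploration constant.

    import math

    C = math.sqrt(2)   # exploration constant

    class Node:
        # hypothetical, minimal tree node for this sketch
        def __init__(self, value=0.0, visits=0):
            self.total_value, self.visits, self.children = value, visits, []

    def uct_select(parent):
        # UCB1 at a tree node: mean value plus an exploration bonus that
        # shrinks as a child is visited more often.
        def score(child):
            if child.visits == 0:
                return float('inf')          # try unvisited children first
            exploit = child.total_value / child.visits
            explore = C * math.sqrt(math.log(parent.visits) / child.visits)
            return exploit + explore
        return max(parent.children, key=score)

    root = Node(visits=10)
    root.children = [Node(value=3.0, visits=5), Node(value=1.0, visits=2)]
    chosen = uct_select(root)   # balances mean value against visit counts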

I do think Satinder's right that this has nothing to do with the theory/practice distinction. The work in poker has certainly motivated both theory and practice, in specific and in general ways.

Satinder Singh said...

Michael, thanks for responding. It is very interesting to hear how you think of your Poker research. [Aside: In a sense it is too bad that there is no place to discuss research strategies and methodologies of various researchers; it would be invaluable to students.]

The discussion here made me think harder about what might really be meant by an illuminating example. Perhaps it does not even make sense to consider poker, or any game, as an illuminating example in itself. A better candidate would be an observation or statistic, e.g., that human poker players demonstrate a specific kind of irrational bias; a researcher could then use that as the basis for a computational idea for building an AI that plays poker. Or we could take human performance on the Iowa gambling task and build an AI from our understanding of that performance.
Does that sound more like Cognitive Science than AI?

In any case, Michael's response makes it clear that the dichotomy is not strict and that individual researchers move back and forth between the two sides as their research progresses. And as Michael said, the best kind of illuminating example probably illuminates some general result.

Anonymous said...

I can give a nice example of an 'illuminating example' from the field of robot control. Marc Raibert did some pioneering work on fairly minimalist robotic machines, described in his book Legged Robots that Balance. Of course, these robots represent far more effort than the economists' examples or physicists' thought experiments, but they made a powerful argument in favor of theoretical and algorithmic methods that can leverage the natural dynamics of the systems being controlled.

Subsequently, people like Dan Koditschek provided a formal theoretical account of why these robots work as well as they do, and people in machine learning and empirical robotics picked up on the ideas and developed them a lot further.

Anonymous said...

Although I am a believer in "big AI", sometimes its generality can be overwhelming: too many possible domains, with too many different attributes. Focusing on a single domain (be it chess or robotics) provides a way to narrow our attention to concrete attributes that really matter in at least one real application.

However, there is always a delicate balance between "illuminating example" and "special-case engineering". If we care too much about performance in the example application, we find ourselves building solutions to domain-specific attributes that will never generalise (like Mike's RoboCup example). This tempting trap probably exists for any real-life domain; I've certainly slipped into it myself at times with Computer Go.

So perhaps an example has the potential to be "illuminating" if a) it simplifies the AI problem by focusing on specific attributes, b) those same attributes are likely to occur in many other domains, and c) the example has sufficient variety and depth that it provides a microcosm of general AI. Luckily, I think we have a plentiful supply of suitable applications!

Finally, even if an example isn't illuminating, it can still provide an important demonstration of the power of a technique. I wonder where TD learning would be now if not for TD-Gammon?
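(For the curious: the heart of such learning is the temporal-difference update. TD-Gammon itself used TD(lambda) with a neural-network evaluation function; the tabular TD(0) sketch below is only illustrative, with hypothetical board states as keys.)

    ALPHA, GAMMA = 0.1, 1.0   # step size; episodic games are typically undiscounted
    V = {}                    # value table keyed by (hypothetical) board states

    def td_update(state, reward, next_state):
        # Move V(state) toward the bootstrapped target: reward + GAMMA * V(next_state).
        v, v_next = V.get(state, 0.0), V.get(next_state, 0.0)
        V[state] = v + ALPHA * (reward + GAMMA * v_next - v)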