Tuesday, October 21, 2008
*I had drafted this earlier this summer but never got around to finishing and posting it. Am doing so now.*
I just finished reviewing for a conference, and looking over my reviews reminded me of a comment a senior colleague made to me at ICML this year. He has been program chair for several conferences and said he thought RL reviewers were the hardest on their own community he had ever seen. Thinking back to my own turns as senior PC member or area chair at ML conferences, I realized I had felt the same way.
So, assuming you agree with the supposition, why is this the case? Looking back over years of reviewing, I think the reasons are:
1. There is a large subset of the RL community that is at best skeptical of papers that don't tackle "real applications". Certainly there is good reason to be hard on shoddy empirical work or claims. Nevertheless, perhaps this subset goes too far?
2. Even assuming a reviewer is willing to accept simulated domains, in the absence of widely accepted benchmarks there is no agreement on a standard suite of problems, and so different reviewers set their own standards for acceptable empirical test sets. Many reviewers reject "gridworld"-type tasks, for example.
3. There is also a healthy skepticism of theoretical results in a section of the RL community. And indeed, some theory, while quite possibly true, has little significance. But again, perhaps these papers are treated too harshly?
4. Perhaps the most important reason, however, is that a significant part of RL research is focused on issues other than strict performance. Arguably, machine learning's focus on benchmarks and performance is misplaced for RL. That is to say, while some part of the RL community needs to focus on engineering and performance, others should be allowed, and even encouraged, to explore the more difficult-to-quantify AI issues.
Of course, it is entirely possible that we as a community are simply not producing enough good papers, and thus that being harsh is the right thing to do. I don't believe this :)
Any comments?
(In a later post, I will make specific suggestions to address the issues above.)
Tuesday, October 14, 2008
General Results versus Illuminating Examples
I recently came across this description of economists, attributed by Paul Krugman to Robert Solow: "There are two kinds of economists: those who look for general results and those who look for illuminating examples." This dichotomy struck me as rather interesting, and upon reflection it was pretty clear to me that my own AI-research methodology leans overwhelmingly toward the general-results side. I am drawn to fairly basic and general questions and issues; very rarely am I motivated by a specific example. Note that despite the superficial similarity between this dichotomy and the "theory versus empirical work" dichotomy, they are quite unrelated. One can take a specific example, e.g., a specific human capability or indeed a specific human failing, and then either build a theory of intelligence or do empirical work from it. Linguists do that. Cognitive scientists do that. Psychologists do that.

Is AI research by its nature restricted to the general-results side of the divide? Maybe this is the divide that separates AI from, say, cognitive science: the former seeks general results while the latter explains illuminating examples.
To all of the three people who might read this: Are there examples of AI research that stemmed from illuminating examples? What role do illuminating examples play in AI?
Monday, October 13, 2008
Krugman and Foundation
On a non-RL note: Paul Krugman, one of the writers whose New York Times blog I have enjoyed reading over the past few months (and incidentally the one who just won the Nobel Memorial Prize in Economics), credits reading Isaac Asimov's Foundation series with sparking his interest in economics. His notion is that, short of inventing the field of psychohistory, economics is the field currently available that comes closest to explaining, understanding, and perhaps predicting the macro outcomes of the micro actions of billions of individuals. How true! And how cool! Makes me want to learn more economics.