Sunday, June 24, 2012

A great talk on "Are Humans Just Another Primate?"

Every so often I have discussions with my AI colleagues about whether the best research approach to building human-level intelligence is to build machines that have the linguistic abilities of humans, or perhaps the high-level problem-solving skills of humans, or whether it is to build machines that can navigate, physically manipulate their world, and deal with other machines at a basic social level. More simply stated: in the current state of AI, is the more effective research approach "building a human" or "building a dog or a chimp"? Those on the side of "building a human" often point to the non-incremental, qualitative differences between humans and non-humans, whilst those on the side of "building a chimp" think there is mostly a difference of degree.

I came across this wonderfully delivered and informative talk by Robert Sapolsky that I recommend listening to (including the questions at the end). To whet your appetite: until recently it was thought that having "a theory of mind" distinguishes humans from non-humans. Apparently this is not the case.

Friday, September 10, 2010

To "know everything"

Eric Schmidt, CEO of Google, describes the circumstance in which "all the information in the world is available at our fingertips" (presumably through always-networked mobile devices, etc.) as one in which "we can literally know everything".

It is interesting that common parlance is shifting toward calling the act of looking something up on the internets "knowing" that something. At one level this is innocuous, in that the meaning of a word or phrase is changing. But at another level it signals a huge shift in culture and society.

If you want to hear Eric make the statement above, here is the link (around 11 minutes into the video).

Sunday, July 12, 2009

Data on Scientists, and What They and the Public Think of Them

An interesting post from the Pew Research Center for the People & the Press. A must-read for academics and researchers. Lots of poll results and some commentary.

Sunday, March 22, 2009

Is University Science somehow more Pure than Industry Science?

In this Washington Post article, the writer (a former stem-cell researcher at Harvard) argues that the view of university research/science as "curiosity-driven" is misplaced and that the incentive structure is broken. I agree!

Here are two relevant quotes from his argument that ring true to me.

"University researchers are in a constant battle for recognition and the rewards associated with success: research space, speaking engagements, funding and autonomy. Consequently, while academic research is often described as "curiosity-driven," the reality is messier, as (curiously) many researchers tend to pursue the trendiest technologies and explore topics that happen to be associated with the most generous levels of research support.

"Moreover, since academic success is determined almost exclusively by the number and prestige of research publications, the incentives to generate results are exceedingly powerful and can encourage investigators to see patterns that may not exist, to disregard contradictory observations that might be important, to overvalue data that might be preliminary or unreliable, and to embrace conclusions that deserve to be viewed with far greater skepticism."

Monday, November 24, 2008

On Academic Freedom

A New York Times opinion writer, Stanley Fish, often writes about questions of interest to university professors. In this article, he writes about the notion of "academic freedom": what it means, and how it is quite different from individual First Amendment rights. He also refers to a new book on the topic. Much of his discussion makes sense to me, in particular the notion that "... academic freedom, rather than being a philosophical or moral imperative, is a piece of policy that makes practical sense in the context of the specific task academics are charged to perform." He argues that academics ought to be protected from the dictates of public opinion but ought to be subject to professional standards and norms. I agree. And this is a distinction that is important for us academics to understand and to implement in our conduct.

Tuesday, October 21, 2008

On Reviewing Reinforcement Learning Papers

*I had drafted this earlier this summer but never got around to finishing and posting it. I am doing so now.*

I just finished reviewing for a conference, and in looking over my reviews I was reminded of a comment a senior colleague made to me at ICML this year. He has been program chair for several conferences and said that he thought RL reviewers were the hardest on their own community he had ever seen. Thinking back to my turns as senior PC member or area chair at ML conferences, I realized I had felt the same way in those cases.

So, assuming you agree with the supposition, why is this the case? Looking back at years of reviewing, I think the reasons are:

1. There is a large subset of the RL community that is at best skeptical of papers that don't do "real applications". Certainly there is good reason to be hard on shoddy empirical work or claims. Nevertheless, perhaps this subset goes too far?

2. Even when a reviewer is willing to accept simulated domains, in the absence of widely accepted benchmarks there is no agreement on a standard suite of problems, and so different reviewers set their own standards for acceptable empirical test sets. Many reviewers reject "gridworld"-type tasks, for example (a minimal sketch of such a task appears after this list).

3. There is also a healthy skepticism of theoretical results in a section of the RL community. And indeed, some theory, while quite possibly true, has little significance. But again, perhaps these papers are treated too harshly?

4. Perhaps the most important reason, however, is that a significant part of RL research is focused on issues other than strict performance. Arguably, machine learning's focus on benchmarks and performance is misplaced for RL. That is to say, while some part of the RL community needs to focus on engineering and performance, others should be allowed, and even encouraged, to explore the harder-to-quantify AI issues.
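To make the "gridworld" reference in point 2 concrete, here is a minimal sketch of such a task: a small grid with a single goal cell and a tabular Q-learning agent. This is an illustrative toy in Python, not drawn from any particular paper; the grid size, rewards, and learning parameters are all arbitrary choices made for the sketch.

# A minimal "gridworld": a 4x4 grid, start in the top-left corner,
# -1 reward per step until reaching the bottom-right goal (terminal),
# and a tabular Q-learning agent. Purely illustrative; all constants arbitrary.
import random

SIZE = 4                                        # grid is SIZE x SIZE
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]    # up, down, left, right
GOAL = (SIZE - 1, SIZE - 1)
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

def step(state, action):
    # Deterministic transition, clipped at the grid edges.
    r = min(max(state[0] + action[0], 0), SIZE - 1)
    c = min(max(state[1] + action[1], 0), SIZE - 1)
    next_state = (r, c)
    done = next_state == GOAL
    return next_state, (0.0 if done else -1.0), done

# Q-table over all (state, action) pairs, initialized to zero.
Q = {((r, c), a): 0.0
     for r in range(SIZE) for c in range(SIZE) for a in range(len(ACTIONS))}

for episode in range(500):
    state = (0, 0)
    done = False
    while not done:
        # Epsilon-greedy action selection.
        if random.random() < EPSILON:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda x: Q[(state, x)])
        next_state, reward, done = step(state, ACTIONS[a])
        best_next = max(Q[(next_state, x)] for x in range(len(ACTIONS)))
        # Standard Q-learning update; no bootstrapping from terminal states.
        Q[(state, a)] += ALPHA * (reward + GAMMA * (0.0 if done else best_next) - Q[(state, a)])
        state = next_state

print("Q-values at start state:", [round(Q[((0, 0), a)], 2) for a in range(len(ACTIONS))])

Tasks at exactly this scale are what some reviewers dismiss out of hand and others treat as a perfectly reasonable controlled testbed, which is part of why review outcomes vary so much.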

Of course, it is entirely possible that we as a community are simply not producing enough good papers, and thus being harsh is the right thing. I don't believe this :)

Any comments?

(In a later post, I will make specific suggestions to address the issues above.)