Thursday, May 29, 2008

AI in Second Life (post 1)

I have had a casual interest in Second Life (SL) for a little while, and periodically I am going to collect links to, and briefly discuss, AI work being done in SL. Here is the first such discussion.

This page has a brief description of work by RPI on building an AI avatar that can hold rudimentary conversations in SL. The description characterizes the avatar as a "four-year-old" based on a theory-of-mind test.

Briefly, a version of a theory-of-mind test (the classic false-belief test) goes as follows. The agent watches person A put an object inside box 1; person A then leaves the room. The agent then sees person B come into the room, take the object out of box 1, and place it inside a different box, box 2. The agent is then asked which box person A will look in for the object when she comes back. An agent with a theory of mind will say that person A will look in box 1 (where A last saw it); an agent without one will say box 2 (where the object actually is).
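To make the distinction concrete, here is a minimal sketch in Python (my own illustration, not the RPI system) of an agent that passes the test by tracking each observer's beliefs separately, versus one that fails by answering from the true state of the world.

class Agent:
    def __init__(self, has_theory_of_mind):
        self.has_theory_of_mind = has_theory_of_mind
        self.world = {}    # actual location of each object
        self.beliefs = {}  # beliefs[person] = that person's view of object locations

    def observe(self, obj, box, witnesses):
        # An event places obj in box; only the listed witnesses see it happen.
        self.world[obj] = box
        for person in witnesses:
            self.beliefs.setdefault(person, {})[obj] = box

    def where_will_look(self, person, obj):
        if self.has_theory_of_mind:
            # Answer from the person's (possibly stale) beliefs.
            return self.beliefs.get(person, {}).get(obj)
        # No theory of mind: answer from the actual world state.
        return self.world[obj]

# The scenario from the post: A puts the object in box 1 and leaves;
# B then moves it to box 2 while A is away.
agent = Agent(has_theory_of_mind=True)
agent.observe("object", "box 1", witnesses=["A", "B"])
agent.observe("object", "box 2", witnesses=["B"])  # A is out of the room
print(agent.where_will_look("A", "object"))  # box 1 -- passes the test

naive = Agent(has_theory_of_mind=False)
naive.observe("object", "box 1", witnesses=["A", "B"])
naive.observe("object", "box 2", witnesses=["B"])
print(naive.where_will_look("A", "object"))  # box 2 -- fails the test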

Here is a video of the RPI agent Edd failing the test, and here is a link to a video of the agent Edd succeeding after having learned.

The RPI researchers convert the English conversation into logical expressions and then use theorem proving as the reasoning engine that drives the avatar's behavior. The authors state, “Our aim is not to construct a computational theory that explains and predicts actual human behavior, but rather to build artificial agents made more interesting and useful by their ability to ascribe mental states to other agents, reason about such states, and have — as avatars — states that are correlates to those experienced by humans.”
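The post does not describe the RPI formalism, but to give a flavor of the "English to logic to theorem proving" pipeline, here is a hedged sketch: the dialogue is reduced to ground facts about what each person witnessed, and a single hand-written inference rule stands in for the theorem prover. The predicate names and the rule are my assumptions for illustration only, not the RPI team's actual logic.

# Facts extracted from the dialogue: sees(Person, Action, Object, Box).
# Crucially, there is no fact saying A saw the object being moved.
kb = [
    ("sees", "A", "put",  "object", "box1"),
    ("sees", "B", "put",  "object", "box1"),
    ("sees", "B", "move", "object", "box2"),
]

def prove_belief(kb, person, obj):
    # Stand-in inference rule: a person believes the object is wherever
    # the last event they themselves witnessed placed it.
    location = None
    for pred, who, action, o, box in kb:
        if pred == "sees" and who == person and o == obj:
            location = box
    return location

# Query believes(A, object, ?X): A will look in box1.
print(prove_belief(kb, "A", "object"))  # box1
print(prove_belief(kb, "B", "object"))  # box2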

I could not find a paper by the authors but did find the slides of a talk they gave at the First Conference on Artificial General Intelligence, 2008.

My comments: The specific theory-of-mind experiment described above seems relatively straightforward, and I have seen others do similar things (e.g., a project from Cynthia Breazeal's lab; if someone knows the precise paper, please point us to it). I would venture that the natural language processing in the RPI project is rudimentary at this point and works only in fairly narrow scripted settings. In summary, it seems to me that the authors used their previous general-purpose logic-based question-answering work to build a specific narrow system that can basically do this one task. Also, it isn't clear what SL added to this experiment, since the conversation is not with an arbitrary human player but rather with the experiment designers. At the same time, the overall goal of ascribing mental states to other agents and reasoning about such states seems laudable, at least at first glance.

(I need to get Charles Isbell to comment on this.)

4 comments:

Michael Littman said...

Sadly, the video seems to have been deleted at YouTube.

Michael Littman said...

Never mind, it's back now. So, it looks like Micah got it wrong, eh? That's an odd demo.

Satinder Singh said...

Yes, Edd the agent got it wrong. (Micah is the other human in the experiment.) The agent learns over time to get it right. There is another video in which the agent gets it right; I will update the post to include that link.

Satinder Singh said...

Here is the link to the video of Edd, the RPI agent, succeeding at the theory-of-mind test.