Every so often I have discussions with my AI colleagues about the best research approach to building human-level intelligence: should we build machines that have the linguistic abilities or high-level problem-solving skills of humans, or should we build machines that can navigate, physically manipulate their world, and deal with other machines at a basic social level? More simply stated: given the current state of AI, is the more effective research approach "building a human" or "building a dog or a chimp"? Those on the side of "building a human" often point to the non-incremental, qualitative differences between humans and non-humans, whilst those on the side of "building a chimp" hold that the differences are mostly ones of degree.
I came across this wonderfully delivered and informative talk by Robert Sapolsky that I recommend listening to (including the questions at the end). To whet your appetite: until recently it was thought that having "a theory of mind" distinguished humans from non-humans. Apparently, that is not the case.