From: Matt Mahoney (matmahoney@yahoo.com)
Date: Sun Mar 01 2009 - 15:13:27 MST
--- On Sun, 3/1/09, Joshua Fox <joshua@joshuafox.com> wrote:
> > Google can answer natural language questions up to about 8 words long.
>
> I think it can't answer most 8-word questions. Here is one that
> narrow AI could handle but that Google doesn't try to answer directly: "Is
> Waunakee north or south of Madison, Wisconsin?" Even reading the pages at
> the first four results does not give the answer.
I said "most". I don't claim that Google is AI (yet). Try "How many teaspoons in a cubic light year?"
> > Maybe that's true for the 100 or so people worldwide who are making a
> > serious effort to build an artificial human brain, what we call AGI.
>
> I'd guess that most AGI researchers do not say that AGI is an
> artificial _human_ brain, just a (potentially quite inhuman) artificial
> _intelligence_: something that optimizes toward achieving goals in
> complex environments with limited resources.
>
> > "Smarter" can either mean doing more of what humans can do (the Turing test)
> Likewise, I think that most researchers in AGI, including Turing in his
> time, don't see the Turing test as the true measure of intelligence, just a
> useful fall-back when no other good definition exists.
I suppose another useful test would be how much money you could make.
A surprising number of people trying to develop AGI haven't even thought about how they will test their systems. I suppose they just think they'll recognize intelligence when they see it. Without a test, you can claim that anything is intelligent.
I prefer prediction accuracy, as measured by compression ratio. It is quick, precise, and repeatable.
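For anyone who wants to try it, here is a minimal sketch in Python using the standard lzma module as the compressor; the file name is just an example, and any compressor can stand in for the predictive model (a model that predicts the next byte well needs fewer bits to encode it):

    import lzma

    def compression_score(path):
        # Score a model (here, LZMA) by how small it makes the file:
        # the compressed size is an upper bound on the cross-entropy of
        # the model implicit in the compressor.
        data = open(path, "rb").read()
        packed = lzma.compress(data, preset=9)
        ratio = len(packed) / len(data)   # smaller is better
        return ratio, 8 * ratio           # ratio, and bits per input byte

    ratio, bpb = compression_score("enwik8")  # example corpus file
    print(f"ratio {ratio:.4f}, {bpb:.3f} bits/byte")

The same file and compressor always give the same number, which is what makes the test repeatable.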
-- Matt Mahoney, matmahoney@yahoo.com