From: Rik van Riel (riel@surriel.com)
Date: Mon Aug 28 2006 - 23:16:06 MDT
Richard Loosemore wrote:
> It's Eliezer's list: you can hardly throw him into a kernelnewbies
> corner of his own list.
That's absolutely not the idea. I'd rather have Eliezer spend
his time on development than on explaining the same idea over
and over again :)
The main reason I started kernelnewbies was that too many people
asked me the same questions over and over again. Putting them
together in one place and having them help each other made the
process more educational and more scalable.
> Do you seriously suggest that I know less than Eliezer about cognitive
> science?
That would be incredibly hard to judge for anyone, since nobody
appears to have created an artificial general intelligence yet.
If this were any other science (except perhaps string theory :)),
people would probably demand an experiment they could reproduce in
their own lab as support for a theory. This is a problem when
one lab has humans for psychology experiments, while the other
lab only has computers...
In this case we have, on the one hand, cognitive science, with a
lot of valid experiments, many of which apply to humans. On the other
hand there are AI programmers, who wonder whether the cognitive
science experiments are relevant to their side of the field.
Worse still, even once the first AGI has been created, there
will be no easy way to tell whether other approaches to creating
intelligence could work just as well.
Once we have a few AGI instances out there, and we can understand
how they work, maybe then I could find an objective answer to your
question. To be honest, it wouldn't surprise me if all our current
ideas about AGI were wrong; otherwise we might have one already.
--
What is important? What you want to be true, or what is true?