From: Hal Finney (hal@rain.org)
Date: Mon Jun 09 1997 - 12:15:31 MDT
Robin Hanson, hanson@hss.caltech.edu, writes:
> OK, given all this intelligent comment, I can reframe the puzzle. Why
> is it that it seems to most people more promising to build up smart
> cooperative agents from scratch, as in an AI approach, than to
> domesticate existing very smart but not cooperative-enough agents? Is
> domestication really that hard compared to learning how to organize
> intelligence and acquiring all that common sense knowledge?
I would think that Robin, as an AI researcher himself, would have the
most insight into the motivations of that community. My guess would be
that AI would not be very interesting if we thought that the best we
were ever going to do was to make something as smart as monkeys. Sure,
achieving monkey-level intelligence and abilities would be an amazing
accomplishment by today's standards, but nobody expects progress to
end there. The real hope is to create human or super-human intelligence.
Possibly, if enough time goes by without any progress in AI, attitudes
will change. If AI eventually comes to be seen as impossible, these
other approaches may become more popular.
Hal
This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 14:44:28 MST