From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Nov 10 2007 - 13:21:36 MST
Robin Hanson wrote:
> I've been invited to write an article for an upcoming special issue of
> IEEE Spectrum on "Singularity", which in this context means rapid and
> large social change from human-level or higher artificial
> intelligence. I may be among the most enthusiastic authors in that
> issue, but even I am somewhat skeptical. Specifically, after ten years
> as an AI researcher, my inclination has been to see progress as very
> slow toward an explicitly-coded AI, and so to guess that the whole brain
> emulation approach would succeed first if, as it seems, that approach
> becomes feasible within the next century.
>
> But I want to try to make sure I've heard the best arguments on the
> other side, and my impression was that many people here expect more
> rapid AI progress. So I am here to ask: where are the best analyses
> arguing the case for rapid (non-emulation) AI progress? I am less
> interested in the arguments that convince you personally than arguments
> that can or should convince a wide academic audience.
All the replies on SL4 as of 10:40AM Pacific seem pretty good to me.
Why are you asking after "rapid" progress? It doesn't seem to be the
key question.
Kahneman's "Economic preferences or attitude expressions? An analysis
of dollar responses to public issues" makes the point that in many
cases, people have no anchors, no starting points, for questions like
"How much should this company be penalized for crime X?" and so they
substitute judgment of "How bad was this company, on a scale of 1 to
Y?", where the actual scale Y varies depending on the person, and then
tack "million dollars" onto the end.
On one memorable occasion, an AI researcher told me that he thought
AGI was 500 years away.
500 years? 500 years ago we didn't even have *science*.
So what's going on? I suspect that, especially among AI researchers,
the question "How long will it be before we get AGI?" is more of an
attitude expression than a historical estimate - "On a scale of 1 to
Y, how hard is it to build AGI?" - where Y varies from person to
person, and then they tack "years" onto the end. Naturally, building
AGI will seem *very* hard if you can't imagine any way to do it (the
imaginability heuristic), so they give a response near the upper end
of their scale. That researcher responded as if I had asked, "On a
scale of 1 to 500, how hard does building AGI *feel*?"
The key realization here is that building a flying machine would also
have *felt* very hard before anyone knew how to do it. But that
feeling reflects a knowledge gap, not solid knowledge of specific
implementation difficulties. We know how stars work; therefore we
know it would be difficult to build a star out of hydrogen atoms. By
contrast, some publication or other, in 1903, said that future flying
machines would be built by the work of "millions of years"(!) of
mathematicians and mechanicians. They didn't know how to do it, and
they mistook that feeling of difficulty for a positive estimate that
doing it *with* the knowledge would be very difficult.
As for knowledge itself, that is a matter of pure basic research, and
if we knew the outcome we wouldn't need to do the research. How can
you put a time estimate on blue-sky fundamental research delivering a
brilliant new insight? Far or near?
It's also possible that AI researchers are substituting a judgment of
"How long would it take to create AGI *using the techniques you
know*?", in which case 500 years might well be an underestimate, if
it could be done at all - like trying to carve Mount Rushmore with
toothpicks.
Others may substitute a judgment of "How good do you feel about AI?"
and give a short time estimate, reflecting their general goodwill
toward the field.
We have no reason to believe that the timing of AGI is predictable
even in principle - that it is a narrow distribution over Everett
branches - let alone that we can predict it in practice with the
knowledge presently available to us.
--
Eliezer S. Yudkowsky                        http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence