Re: fluffy funny or hungry beaver?

From: Hal Finney (hal@finney.org)
Date: Sun Jun 09 2002 - 12:56:35 MDT


Eugene writes:

> While I'm rather open-minded on the issue, you'll find that the rest of
> the world has a low tolerance for people engaged in dangerous activities.
> The only reason we don't have Butler's Jihad on our hands right now is
> that neither the general public nor the establishment considers the
> risks to be real. This is bound to change.

One problem in your discussion with Eliezer is that you both seem to
accept that progress towards AI is likely to be significant within the
relatively near future. But in fact there is little evidence that AI is
on a successful path. The recent spasm of publicity about the massively
failing Cyc project just reminds us how far we are from a proven strategy
which can lead to successful human-level AI, let alone super-human.

No doubt Eugene is right that public attitudes towards the threat of
super-AI are "bound to change", since presumably AI cannot stay out of reach
forever. But we may face many more severe challenges from nanotech,
biotech, and brain enhancement long before AI becomes a threat.

I know that Eliezer feels that he has a blueprint for a path to successful
AI within possibly just a few years, but AFAIK this plan has not been
endorsed by mainstream AI researchers. I am curious to hear how Eugene
views the actual prospects for human or super-human AI, in terms of the
time frame. Do you think AI will come first, or will we have to deal
with nanotech before AI?

Hal



This archive was generated by hypermail 2.1.5 : Sat Nov 02 2002 - 09:14:41 MST