Re: IA vs. AI was: longevity vs singularity

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Jul 23 1999 - 14:36:42 MDT


paul@i2.to wrote:
>
> On Thu, 22 July 1999, "den Otter" wrote:
>
> > Eliezer seems to
> > favor the AI approach (create a seed superintelligence and
> > hope that it will be benevolent towards humans instead of
> > using our atoms for something more useful), which is
> > IMHO reckless to the point of being suicidal.

Funny, those are almost exactly the words I used to describe trying to
stop or slow down the Singularity.

> > A much better, though still far from ideal, way would be
> > to focus on human uploading, and when the technology
> > is operational upload everyone involved in the project
> > simultaneously. I'm very surprised that so many otherwise fiercely
> > individualistic/libertarian people are so eager to throw
> > themselves at the mercy of some machine. It doesn't
> > compute.

A suggestion bordering on the absurd. Uploading becomes possible at
2040 CRNS. It becomes available to the average person at 2060 CRNS.
Transhuman AI becomes possible at 2020 CRNS. Nanotechnology becomes
possible at 2015 CRNS.

If you can stop all war in the world and succeed in completely
eliminating drug use, then maybe I'll believe you when you assert that
you can stop nanowar for 45 years, prevent me from writing an AI for 40,
and stop dictators (or, for that matter, everyone on this list) from
uploading themselves for 20. Synchronized Singularity simply isn't feasible.

> Exactly! I've been promoting IA instead of AI all along.
> Yet you and I seem to be in the minority. What gives?
> This issue alone has caused me more alienation from the official
> extropian and transhumanist movements than
> anything else. As far as I'm concerned I have little
> use for Hans Moravec, Marvin Minsky and other
> AI aficionados that most people here worship
> as heroes. Besides, Marvin Minsky in person
> can be the most contemptuous person in the
> room. His behavior at Extro 3 only pushes
> my point home that many of the AI researchers
> do *not* have our best interest in mind.

Be sure to include me in that list; after all, I've openly declared that
my first allegiance is not to humanity.

-- 
           sentience@pobox.com          Eliezer S. Yudkowsky
        http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way
