Re: >H RE: Present dangers to transhumanism

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Aug 31 1999 - 14:36:50 MDT


hal@finney.org wrote:
>
> Eliezer S. Yudkowsky, <sentience@pobox.com>, writes:
> > Frankly, I think a fairly large percentage of you *are* naive
> > technophiles. You think you can take controlled sips from a tidal wave.
>
> However, many of us don't subscribe to the runaway AI/singularity scenario.

I was mostly referring to the nanoSanta tendency. I've changed
philosophies quite a bit over the last three years, but one thing that's
remained constant since the day I posted "Staring Into the Singularity"
is that I despise childhood fantasies of omnipotence; they show a lack of
both imagination and maturity. Yes, nanotech is powerful enough to
build you a luxury Mercedes, or heal your hand if it gets run over by a
car. But it's way, way, way more powerful than that. Even leaving out
all IA, AI, and military implications, it would change the fabric of
society past all recognition. Accounts of twentieth-century American
culture unaltered by nanotech are as foolish as accounts of a
computer-assisted hunter-gatherer society using guided missiles to hunt
down deer.

But no, they don't think that. They think in Santa syllogisms. They
want a Mercedes. Nanotech is big and powerful. Nanotech will give them
a Mercedes. QED.

What *else* will it give them? Supercomputers easily capable of running
AIs? Uploading? Vingean headbands? A mass diaspora from Earth? They
don't know. They don't care. All they want is a Mercedes. What if I
try to point out the military implications? They don't want military
implications. They just repeat the words "active shields", as if the phrase were a magic cure.

> As Robin Hanson pointed out, we don't know how quickly the difficulty of
> increasing intelligence grows as you become smarter. Is it easier for
> an IQ of 400 to go to an IQ of 800 than for an IQ of 200 to go to 400?
> We have no evidence one way or the other. It is uncharted territory.

Those two statements are not equivalent at all. Yes, it's uncharted
territory. That doesn't mean we have no grounds for believing some
statements over others. I don't see any grounds for believing in a
"difficulty" that will prevent a nanocomputer with a million times the
raw computing power of a human from being at least as far beyond
humans as humans are beyond chimpanzees, or in a difficulty that prevents
an AI running over the Internet from being intelligent enough to use
rapid infrastructure to recompile the planet's mass and upload the
population. Whether difficulties occur after that is something of a
moot point, don't you think?

--
            sentience@pobox.com          Eliezer S. Yudkowsky
         http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way

