RE: Singularity: AI Morality

From: Billy Brown (bbrown@conemsco.com)
Date: Mon Dec 07 1998 - 15:19:54 MST


Eliezer S. Yudkowsky wrote:
> Primarily to beat the nanotechnologists. If nuclear war doesn't get us in
> Y2K, grey goo will get us a few years later. (I can't even try to slow it
> down... that just puts it in Saddam Hussein's hands instead of Zyvex's.)

To be honest, I have never found the gray goo scenario plausible. Designing
even the simplest nanotech devices is going to be a huge problem for humans -
even a fairly specialized assembler would have more components than a space
shuttle, after all. Something like smart matter or utility fog would have more
distinct parts than our entire current civilization, which implies it would
also take more effort to design. Until this design bottleneck is broken,
progress is going to come in baby steps, and no one group will ever be that
far ahead of the herd.

> Final question: Do you really trust humans more than you trust AIs? I might
> trust myself, Mitchell Porter, or Greg Egan. I can't think of anyone else
> offhand. And I'd trust an AI over any of us.

I don't trust any single being with unlimited power. I think we are more
likely to come through the Singularity in decent shape if it is precipitated
by a large community of individuals who are roughly equal in capability, and
who therefore constrain one another, than if a single creature suddenly
achieves relative omnipotence.

I'll post a reply on the technical issues tomorrow - not enough time left
today.

Billy Brown
bbrown@conemsco.com
