Re: Yudkowsky's AI (again)

From: Lee Daniel Crocker (lcrocker@mercury.colossus.net)
Date: Thu Mar 25 1999 - 14:11:14 MST


> As for anyone else trusting den Otter, whose personal philosophy
> apparently states "The hell with any poor fools who get in my way," who
> wants to climb into the Singularity on a heap of backstabbed bodies, the
> Sun will freeze over first. Supposing that humans were somehow uploaded
> manually, I'd imagine that the HUGE ORGANIZATION that first had the
> power to do it would be *far* more likely to choose good ol' altruistic
> other's-goals-respecting Lee Daniel Crocker...

I wouldn't trust me if I were them. Sure, I can be quite tolerant
of other humans and their goals--even ones I think mildly evil--if
for no other reason than the value of intellectual competition and
the hope of reform. There's also my strong /current/ conviction
that norms are not yet rationally provable, so my certainty of the
superiority of my own goals is in question. Keeping around those
with other goals keeps me honest. But if I were uploaded into a
superintelligence that rationally determined that the world would
be a better place with some subset of humanity's atoms put to a
better purpose, I would follow that reasoning: I am more committed
to rationality than to humanity.
I am an altruist specifically because it serves my needs--I can't
get what I want except through cooperation with others. If that
condition changes, then it is likely my attitude would change with it.

So does that mean I get to make both of your lists: sane persons
and dangerous persons? :-)

--
Lee Daniel Crocker <lee@piclab.com> <http://www.piclab.com/lcrocker.html>
"All inventions or works of authorship original to me, herein and past,
are placed irrevocably in the public domain, and may be used or modified
for any purpose, without permission, attribution, or notification."--LDC

