From: den Otter (neosapient@geocities.com)
Date: Thu Mar 25 1999 - 11:11:45 MST
----------
> From: Michael S. Lorrey <mike@lorrey.com>
> den Otter wrote:
>
> > ----------
> > > From: Eliezer S. Yudkowsky <sentience@pobox.com>
>
> > > (Earliest estimate: 2025. Most realistic: 2040.)
> > > We're running close enough to the edge as it is. It is by no means
> > > certain that the AI Powers will be any more hostile or less friendly
> > > than the human ones. I really don't think we can afford to be choosy.
> >
> > We _must_ be choosy. IMHO, a rational person will delay the Singularity
> > at (almost?) any cost until he can transcend himself.
>
> Which is not practical. Initial uploads will be expensive.
Indeed they will, which is why I keep stressing the importance
of wealth (accumulating as much wealth as possible, by any
practical means, would arguably be the single most useful
thing any transhuman organization could do).
> As the cost of the
> technology drops, use frequency increases, thus applying economies of scale.
I strongly suspect that true upload-grade tech will never become
mainstream. Expensive prototypes will emerge in one or several
labs, quickly followed by the Singularity (assuming that the
earth doesn't get destroyed first and that no AI transcends before
any human).
> Since
> you are talking about guarding against even ONE Power getting there before you,
> then no one will ever upload. Someone has to be first, if it is done at all.
And someone *will* be first: the one with the best combination of
knowledge, wealth, determination, and blind luck. Of course, instead
of a single individual it could also be a (small) contract group that
transcends simultaneously (see below).
> It is all a matter of trust. Who do you trust?
No one, of course (I watch The X-Files every week, you know). Hence
the proposal to create a Singularity group; combining brain power
and resources increases everyone's chances. If all works out OK,
you all transcend simultaneously, thus (theoretically) giving
everyone an equal chance. I have no idea what would happen
next, but that wouldn't be a human problem anymore. I think
this is just about the only solution that would have any chance
of success.
> What you want to guard against is unethical persons being uploaded.
Side note: who decides what is "ethical"?
> You must ask
> yourself after careful investigation and introspection if any one of the first
> could be trusted with god like powers. If not, those individuals must not be
> allowed to upload. Interview each with veridicators like lie detectors, voice
> stress analysers, etc. to find out a) what their own feelings and opinions about
> integrity, verbal contracts, etc. are, and b) have them take something like an
> oath of office (the position of "god" is an office, isn't it?).
This would probably only filter out the bad liars and openly unstable
persons, and leave the really dangerous types: any lie-detection
system can be tricked given enough effort (funds, intelligence, etc.),
the investigators can be bribed, and why would the rich & powerful
consent to such a screening in the first place (and how could you be
sure that you didn't miss anyone)? Last but not least, the ethics of a
post- or even transhuman could change in dramatic and, from the human
point of view, totally unpredictable ways. Even the most altruistic,
benevolent person could easily turn into a genocidal Power (and vice
versa, but would you bet your _life_ on that?)
> The transhuman transition period may be the first time when we can get a
> practical merit system of citizenship in place, where all who wish to belong to
> the new polity must earn their place and understand their responsibilities as
> well as their rights.
You could be right of course, but I think there's at least an equally
great chance that the transition period will simply be a matter of
survival of the fittest.