Re: Yudkowsky's AI (again)

From: Michael S. Lorrey (mike@lorrey.com)
Date: Thu Mar 25 1999 - 09:24:58 MST


den Otter wrote:

> ----------
> > From: Eliezer S. Yudkowsky <sentience@pobox.com>

> > (Earliest estimate: 2025. Most realistic: 2040.)
> > We're running close enough to the edge as it is. It is by no means
> > certain that the AI Powers will be any more hostile or less friendly
> > than the human ones. I really don't think we can afford to be choosy.
>
> We _must_ be choosy. IMHO, a rational person will delay the Singularity
> at (almost?) any cost until he can transcend himself.

Which is not practical. Initial uploads will be expensive. As the cost of the
technology drops, usage increases, bringing economies of scale into play. Since
you are talking about guarding against even ONE Power getting there before you,
no one would ever upload. Someone has to be first, if it is done at all: a
number of someones for the test phase of the technology, then those who can
afford the cost, and then, as those individuals have an impact on the economy,
others can be bootstrapped.

It is all a matter of trust. Who do you trust?

What you want to guard against is unethical persons being uploaded. You must ask
yourself, after careful investigation and introspection, whether any one of the first
could be trusted with godlike powers. If not, those individuals must not be
allowed to upload. Interview each of them with veridicators such as lie detectors, voice
stress analysers, etc., to a) find out what their own feelings and opinions about
integrity, verbal contracts, etc. are, and b) have them take something like an
oath of office (the position of "god" is an office, isn't it?).

The transhuman transition period may be the first time when we can get a
practical merit system of citizenship in place, where all who wish to belong to
the new polity must earn their place and understand their responsibilities as
well as their rights.
