From: Samantha Atkins (samantha@objectent.com)
Date: Sat Sep 30 2000 - 01:37:51 MDT
"Eliezer S. Yudkowsky" wrote:
>
> There is. One group creates one mind; one mind creates the Singularity. That
> much is determined by the dynamics of technology and intelligence; it is not a
> policy decision, and there is no way that I or anyone else can alter it. At
> some point, you just have to trust someone, and try to minimize coercion or
> regulations or the use of nuclear weapons, on the grounds that having the
> experience and wisdom to create an AI is a better qualification than being
> able to use force. If the situation were iterated, if it were any kind of
> social interaction, then there would be a rationale for voting and laws -
> democracy is the only known means by which humans can live together. But if
> it's a single decision, made only once, in an engineering problem, then the
> best available solution is to trust the engineer who makes it - the more
> politics you involve in the problem, the more force and coercion, the smaller
> the chance of a good outcome.
I don't believe in nuking people I disagree with, or in making such threats,
either. But I will point out that there are, and will be, many teams
attempting to create an AI capable of becoming an SI. What makes you think
one of the teams with white hats, rather than one with black hats (or simply
gray), will get there first? Also, I don't believe the replication arguments
put forth so ably by Eugene are easily dismissed. In short, I don't believe
that a single AI will be the only one of any significance that we get.
- samantha