From: Dan Clemmensen (dgc@shirenet.com)
Date: Tue Sep 03 1996 - 17:47:30 MDT
N.Bostrom@lse.ac.uk wrote:
>
> My contention is that with only one full-blown >AI in the
> world, if it were malicious, the odds would be on the side
> that it could annihilate humanity within decades.
>
I feel that the effects of a truly malicious >AI would be much more
dramatic. An >AI can easily augment its own intelligence by adding
computing capacity and by other means that it will be able to discover
or develop by applying its intelligence. This is a rapid feedback
mechanism: each gain in intelligence makes the next round of
augmentation easier to find and faster to apply. Thus, as soon as a
moderately inventive AI comes into existence, it can become even more
intelligent. If the AI has the goal of destroying humanity, it would be
able to do so within weeks, not decades. Moreover, unless the AI has
the active preservation of humanity as a goal, it's likely to destroy
humanity as a side effect.
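
To make the feedback argument concrete, here is a minimal toy model
(a sketch only: the per-cycle gain, the initial cycle length, and the
speedup factor are illustrative assumptions, not estimates of any real
system):

    # Toy model of recursive self-augmentation: each cycle multiplies
    # capacity, and greater capacity shortens the next cycle. All
    # parameters below are assumed for illustration only.
    def cycles_until(target, capacity=1.0, gain=1.5,
                     cycle_days=30.0, speedup=0.8):
        days = 0.0
        cycles = 0
        while capacity < target:
            capacity *= gain        # self-improvement multiplies capacity
            days += cycle_days
            cycle_days *= speedup   # a smarter system iterates faster
            cycles += 1
        return cycles, days

    cycles, days = cycles_until(target=1000.0)
    print(f"{cycles} cycles, {days:.0f} days to 1000x initial capacity")

With these made-up numbers the run prints 18 cycles and about 147 days.
The interesting point is that total elapsed time is bounded by
cycle_days / (1 - speedup) = 150 days no matter how large the target:
once the loop closes, the feedback, not the goal, sets the timescale.
That is the sense in which "weeks, not decades" is plausible.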
This same argument applies to any SI which is capable of rapid
self-augmentation, not just a straight AI. Since I think that any SI
likely to be developed in the near future will have a large computer
component, it will be capable of rapid self-augmentation.
My hope is that the SI will develop a "morality" that includes the
active preservation of humanity, or (better) the uplifting of all
humans, as a goal. I'm still trying to figure out how we (the extended
transhumanist community) can further that goal.