From: N.Bostrom@lse.ac.uk
Date: Tue Sep 03 1996 - 12:47:14 MDT
About assuring that an >AI won't harm humans, Robin Hanson
said that the police would hunt it down if it misbehaved.
OK, let's assume that an >AI is positively malicious. It is
easy to see what would happen to a moderate >AI in a
laboratory if it misbehaved. It would be switched off. It
could be more problematic if it were out there in the real
world, perhaps with the responsibility for a banking system.
Economic losses could be great, but we would hardly risk
total destruction unless we gave it unrestricted power over
the US nuclear arsenal or such.
But when transhumanists talk about >AI they hardly mean a
moderate >AI, something like a very brilliant human and then some. We
speculate about a machine that would be a million times
faster than any human brain, and with correspondingly great
memory capacity. Could such a machine, given some time, not
manipulate a human society with subtle suggestions that seem
perfectly reasonable yet unnoticeably effect a general change in
attitude and policy? And all the while it would look as if it
were a perfectly decent machine, always concerned about our
welfare...
How likely is it that a malicious >AI could bring disaster
to a human society that was initially determined to take
the necessary precautions?
Society is a quasi-chaotic system: a small perturbation of
initial conditions will often lead to large unexpected
consequences later on. But there are also regularities and
causal relations that can to some extent be predicted. The
manipulative power of an >AI would depend on the existence
of such sociological regularities that would be obvious to
the >AI but would look like irrelevant coincidence to human
observers. It is a question of how much sociology and
psychology there is to be discovered between what we know
now and what an >AI could learn in the course of a few years.
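(An illustration of the sensitivity claim above, not part of the original
post: the logistic map is a textbook chaotic system in which two starting
points differing by one part in a billion diverge to an order-one gap
within a few dozen iterations.)

```python
# Sketch of sensitive dependence on initial conditions, using the
# logistic map x -> r*x*(1-x) with r=4, a standard chaotic regime.

def logistic(x, r=4.0):
    """One iteration of the logistic map."""
    return r * x * (1.0 - x)

def trajectory(x0, steps, r=4.0):
    """Return the list [x0, x1, ..., x_steps] of iterates."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1], r))
    return xs

# Two trajectories whose starting points differ by only 1e-9.
a = trajectory(0.3, 60)
b = trajectory(0.3 + 1e-9, 60)

# The initial gap is tiny; the maximum gap along the run grows large.
initial_gap = abs(a[0] - b[0])
max_gap = max(abs(x - y) for x, y in zip(a, b))
```

The analogy is loose, of course: the point is only that in such systems a
nearly invisible perturbation can dominate the later state.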
Exactly how much manipulation it would take depends on how
close we are to the nearest road to disaster. Perhaps we are
close. A deadly virus that happens to be produced as a
by-product of some medical experiment; a small incident
that leads to escalating hatred between China and the USA;
the list goes on...
My contention is that with only one full-blown >AI in the
world, if it were malicious, the odds are that it could
annihilate humanity within decades.
Nicholas Bostrom n.bostrom@lse.ac.uk
This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 14:35:44 MST