From: Robin Hanson (hanson@dosh.hum.caltech.edu)
Date: Tue Sep 03 1996 - 16:11:43 MDT
N.Bostrom@lse.ac.uk writes:
>About ensuring that an >AI won't harm humans, Robin Hanson said that
>the police would hunt it down if it misbehaved. ...
>speculate about a machine that would be a million times faster than
>any human brain, and with correspondingly great memory capacity.
>Could such a machine, given some time, not
>manipulate a human society by subtle suggestions that seem
>very reasonable but unnoticeably effect a general change in
>attitude and policy? ...
>My contention is that with only one full-blown >AI in the world, if it
>were malicious, the odds would favor its being able to annihilate
>humanity within decades.
Sure, given one mean super-AI, with the rest of the world far behind, we
would be at its mercy. Similar fears come from one person in control
of any vastly superior technology, be it nanotech, nukes, homemade
black holes, whatever. But realistically, any one AI probably won't
be too far ahead of any other AI, so they can police each other.
Robin Hanson hanson@hss.caltech.edu http://hss.caltech.edu/~hanson/