Nick writes:
>Even if there were many SIs around, I don't see how that would pose
>less of a threat to "human" interests than if there were just one SI.
The same way that a division of powers helps citizens keep their
government in line. A single autarch would be very hard to control,
but divide up power the right way, and citizens can even get the
impression that they run things.
>> If they develop slowly, we can experiment with different
>> approaches and see what works.
>
>The outcome we are trying to prevent is that an SI takes over the
>world. We can test various AIs by releasing them and seeing what
>happens. If nothing happens, that doesn't prove that there is no risk
>-- maybe the AI just wasn't powerful enough (or maybe it's working on
>a plan that takes a long time to execute). If, on the other hand, the
>experiment has a positive result, then it's too late to do anything ...
Again you're only considering all-or-nothing alternatives. Power-hungry
SIs should have more than two modes of behavior: perfect submission vs.
taking over the world.
Robin Hanson
hanson@econ.berkeley.edu http://hanson.berkeley.edu/
RWJF Health Policy Scholar, Sch. of Public Health 510-643-1884
140 Warren Hall, UC Berkeley, CA 94720-7360 FAX: 510-643-8614