Robin writes:
> >> If they develop slowly, we can experiment with different
> >> approaches and see what works.
> >
> >The outcome we are trying to prevent is that an SI takes over the
> >world. We can test various AI:s by releasing them and see what
> >happens. If nothing happens, that doesn't prove that there is no risk
> >-- maybe the AI just wasn't powerful enough (or maybe it's working on
> >a plan that takes a long time to execute). If, on the other hand, the
> >experiment has a positive result, then it's too late to do anything ...
>
> Again you're only considering all or nothing alternatives. Power-hungry
> SIs should have more than two modes of behavior: perfect submission vs. take
> over the world.
The two modes I was considering were: (1) take over the world, or (2)
don't take over the world. The latter case can be subdivided into (i)
total submission (ii) competition while respecting human property
rights; and there are other combinations as well. But my point was
that we don't want to determine through a field study whether (1)
will happen or not. I think we are naive if we let a
superintelligence loose with the intention of stopping it if it starts
to appear too power-hungry (creating monopolies etc.). For one thing,
the coup could happen much too quickly and subtly for us to be able to
do anything about it. For another, the superintelligence would
anticipate our action and adopt a strategy to obtain power without
triggering our alarms. It wouldn't make an overt move until it was
quite certain to succeed. So we would have little chance of gaining
information through our trial that would lead us to choose to shut
the SI down.
_____________________________________________________
Nicholas Bostrom
Department of Philosophy, Logic and Scientific Method
London School of Economics
n.bostrom@lse.ac.uk
http://www.hedweb.com/nickb
Received on Tue May 19 01:29:19 1998