Robin Hanson wrote:
> You're really talking about controlling a super intelligence
> who suddenly became much much smarter than us, and who is the
> only such creature around (a scenario I find quite unlikely).
> If there are many such creatures, they may control each other.
Even if there were many SIs around, I don't see how that would pose
less of a threat to "human" interests than if there were just one
SI. An exception to this is if the individual SIs are constructed in
such a way that they tend to help humans (maybe through having
appropriate values programmed in). Then one could guard against
accidental mistakes and mutations by having several SIs keeping
track of one another.
> If they develop slowly, we can experiment with different
> approaches and see what works.
The outcome we are trying to prevent is that an SI takes over the
world. We can test various AIs by releasing them and seeing what
happens. If nothing happens, that doesn't prove that there is no risk
-- maybe the AI just wasn't powerful enough (or maybe it's working on
a plan that takes a long time to execute). If, on the other hand, the
experiment has a positive result, then it's too late to do anything
about it: the AI has already taken over the world. Field studies are
therefore essentially either unhelpful or disastrous in this game.
It doesn't make any difference if AIs develop slowly, except that it
gives us more time to think about the consequences.
> >Controlling a superintelligence by trade
> >Why pay for something you can get for free? If the superintelligence
> >had the power to control all matter on earth, why would it keep such
> >irksome inefficiencies as humans and their wasteful fabrics?
>
> Now you've assumed that this intelligence is so "super" that we have
> nothing to offer it that it might want. Even for a creature who is
> as smart as the 10,000 smartest humans now on Earth, this seems unlikely.
But couldn't the SI get what it wants from these humans without
paying any fees? If it wants the matter that makes up their bodies
and their houses -- it sends in nanomachines to chew them up; if it
wants their knowledge and their skills -- it sends in nanomachines to
disassemble and scan their brains. If it wants their intellectual
power, it could re-render them as microchips.
This presupposes that the SI has the physical power to do these
things should it so decide. What we are discussing is how to prevent
an SI from getting into a position where it could obtain the total
world power that would enable it to carry out such operations. If we
assume that the SI has to obey laws and legislation, then there
could well be reasons for it to trade with humans (at least humans
who own non-human capital).
_____________________________________________________
Nicholas Bostrom
Department of Philosophy, Logic and Scientific Method
London School of Economics
n.bostrom@lse.ac.uk
http://www.hedweb.com/nickb
Received on Sat May 16 02:33:22 1998