bostrom@ndirect.co.uk ("Nick Bostrom") writes:
>We could restrict the output channel of the superintelligence to one
>Would this method -- severely restricting its output channels -- be a
>reliable way of ensuring long-term control over the superintelligence?
The biggest problem with this is that the "We could restrict" assumes
much more centralized control over research than I think is possible.
What stops a lone hacker from using a different approach?
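(For concreteness, here is a toy sketch of the sort of restriction Nick
describes: a gate that collapses whatever the boxed system computes down
to a single bit per query. All the names are mine, and the "oracle" is a
stand-in, not a claim about how such a system would actually be built.)

    def gated_query(oracle, question):
        # The oracle may compute anything it likes internally; the
        # gate lets exactly one bit of that computation escape.
        answer = oracle(question)
        return bool(answer)

    # Stand-in oracle, just so the sketch runs:
    dumb_oracle = lambda q: len(q) % 2 == 0
    print(gated_query(dumb_oracle, "Is it safe to let you out?"))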
>Controlling a superintelligence by trade
>
>Why pay for something you can get for free? If the superintelligence
>had the power to control all matter on earth, why would it keep such
>irksome inefficiencies as humans and their wasteful fabrics? Only if
>it had a special passion for humans, or respect for the standing
>order, would it keep us, for we would certainly lack any instrumental value for a
>superintelligence. The same holds for uploads, though the cost of
>maintaining us in that form would be much smaller, and we could avoid
>some limitations in our biological constitution that way.
Multiple superintelligences are likely to use property rights
to control scarce resources.
The best way to implement such rights is probably to use simple rules
such as "whatever has the key to X owns X"; e.g. disk storage space is
likely to be owned by whatever entity knows the password to the account
associated with that space. Trying to add rules that distinguish between
intelligent owners and stupid ones creates much complexity, and as long
as there is a continuum of intelligence levels, the majority are unlikely
to be confident that they would fall on the desirable side of an
intelligence threshold if one were established.
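(A minimal sketch of the "whatever has the key to X owns X" rule, in
Python; the Registry class and its methods are my invention, chosen only
to show that the ownership test never asks how smart the caller is.)

    import hashlib, os

    class Registry:
        def __init__(self):
            self._locks = {}          # resource -> (salt, key hash)

        def claim(self, resource, key):
            # Store only a salted hash of the key, never the key itself.
            salt = os.urandom(16)
            self._locks[resource] = (salt, hashlib.sha256(salt + key).hexdigest())

        def owns(self, resource, key):
            # Anything that can present the key controls the resource.
            salt, digest = self._locks[resource]
            return hashlib.sha256(salt + key).hexdigest() == digest

    reg = Registry()
    reg.claim("disk:/home/pcm", b"password1")
    print(reg.owns("disk:/home/pcm", b"password1"))   # True
    print(reg.owns("disk:/home/pcm", b"guess"))       # False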
A superintelligence that tries to get around the basic rules of an operating
system risks being exiled by its peers because it is untrustworthy. This
isn't all that different from the reasoning that prevents you and me from
stealing from mentally incompetent humans.
This reasoning works best when dealing with a variety of uploaded humans
and other digital intelligences. It probably fails if there is just one
superintelligence dealing with biological humans.
>It is not at all reasonable to suppose that the human species, an
>evolutionary incident, constitutes any sort of value-optimised
>structure -- except, of course, if the values are themselves
>determined by humans; and even then it is not unlikely that most of us
>would opt for a gradual augmentation of Nature's work such that we
>would end up being something other than human. Therefore, controlling
>by trade would not work unless we were already in control either by
>force or value selection.
I don't understand this -- the last sentence doesn't have any obvious
connection to the prior one.
>1. By creating a variety of different superintelligences and
>observing their ethical behaviour in a simulated world, we should in
>principle be able to select a design for a superintelligence with the
>behavioural patterns that we wish. -- Drawbacks: the procedure might
>take a long time, and it presupposes that we can create VR situations
>that the SI will take to be real and that are relevantly similar to
>the real world.
The longer it takes, the more people will be tempted to bypass this
procedure.
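(For what it's worth, the procedure in (1) reduces to a generate-and-test
loop like the sketch below. The scoring function is the part nobody knows
how to write; I've put a random placeholder there to make that explicit.)

    import random

    def score_ethics(design, world):
        # Placeholder for the hard part: judging "ethical behaviour"
        # from a design's actions in a simulated world.
        return random.random()

    def select_design(make_design, make_world, candidates=100):
        # Breed candidate designs, watch each in a simulated world,
        # and keep the one that scores best.
        best, best_score = None, float("-inf")
        for _ in range(candidates):
            design = make_design()
            score = score_ethics(design, make_world())
            if score > best_score:
                best, best_score = design, score
        return best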
>2. If we can make a superintelligence that follows instructions, then
>we are home and safe, since a superintelligence would understand
>natural language. We could simply ask it to adopt the values we
>wanted it to have. (If we're not sure how to express our values, we
>could issue a meta-level command such as "Maximize the values that I
>would express if I were to give the issue a lot of thought.")
The only way that I can imagine it getting enough data about human
values to figure out what would happen if I gave the issue more thought
than I actually have would involve uploading me. If uploading is possible
at this stage, then uploading is a better path to superintelligence than
the others you've considered. If not, I doubt your superintelligence will
know whether it is maximising human values.
--
------------------------------------------------------------------------
Peter McCluskey          | Critmail (http://crit.org/critmail.html):
http://www.rahul.net/pcm | Accept nothing less to archive your mailing list