RE: Posthuman mind control (was RE: FAQ Additions)

From: Billy Brown (bbrown@conemsco.com)
Date: Thu Feb 25 1999 - 11:17:32 MST


Michael S. Lorrey wrote:
> Billy Brown wrote:
>
> > Here we have the root of our disagreement. The problem rests on an
> > implementation issue that people tend to gloss over: how exactly do you
> > ensure that the AI doesn't violate its moral directives?
>
> It's actually rather straightforward. There are well-publicized experiments
> where people were given electrical impulses to their brains, which made them
> do something, like scratch themselves, etc. In every case, the test subjects
> stated that they felt that they were the ones in control, that they decided
> to move thus, and were able to rationalize very good reasons why they moved
> thus. There was absolutely no sensation of outside control.
>
> Thus, any moral directives we hardwire into an AI it will consider to be so
> much a part and parcel of its own existence that it could not conceive that
> it would be the same being if we took them away. It would see any attempt to
> remove those directives as an attempt at mind control, and would defend
> itself against such intrusion. So long as one of its directives were to not
> itself remove any of its own prime directives, it would never consider such
> a course of action for itself.

Well, yes, that is the intelligent way to set up a mind-control system.
However, if you read the rest of my post, you'll see that this isn't what we
were talking about.

Nick Bostrom was arguing in favor of programming a fundamental moral system
into the AI, and then turning it loose with complete free will. My argument
is that this is very unreliable - the more complex a mind becomes, the more
difficult it is to predict how its moral principles will translate into
actions. Also, an intelligent entity will tend to modify its moral system
over time, which means that it will not retain an arbitrary set of
principles indefinitely.

Now, I don't think that ongoing mental coercion is a good idea either, but
that's a different line of argument. I would expect that you could devise
an effective scheme for controlling any static mind, so long as it isn't too
much smarter than you are. If you want to control something that is
self-modifying, you've got big problems - how do you design a control
mechanism that will remain effective no matter what your creation evolves
into?

> This brings up the subject of limits. As extropians, we believe in there
> being few or no limits on human beings, outside of a limit on interfering
> with others harmfully. We must ask, "Does this sort of moral engineering fit
> with extropy?" I say it does, for only one reason. We are talking about the
> design specs of beings not yet in existence, much as we could talk about the
> possible genetic codes of children we might have. We are not talking about
> altering beings already in existence. Altering beings already in existence,
> against their will, is obviously against extropy. Altering the design of a
> being not yet in existence is not against extropy. Once a design-altered
> individual comes into existence, its sense of self is derived from its
> design. That we were able to finely control what type of individual came
> into existence is no more against extropy than controlling what the genetic
> code of our children will be.

An interesting, and rather disturbing, point. I think I'll reserve judgment
on that one for a bit. Does anyone else have an opinion about it?

Billy Brown, MCSE+I
bbrown@conemsco.com


