From: Nick Bostrom (bostrom@ndirect.co.uk)
Date: Wed Feb 24 1999 - 13:34:53 MST
Billy Brown wrote:
> If you are going to create an actual person (as opposed to an animal, or a
> special-purpose device), you have a moral obligation to give them the
> ability to reason, to learn, and to grow. Any sentient being should be
> capable of asking questions, seeking the answers, and applying those answers
> to their own thoughts and actions.
Does this mean you think that no animals are sentient? Sounds
implausible to me.
> This standard still gives wide latitude in the choice of personality,
> intelligence level, ease of self-modification, etc. However, it would
> forbid any truly permanent form of mind control. To enforce a fixed moral
> code you must create a mind that is incapable of thinking about morality, or
> (more likely) that is capable of thinking about it but can never change its
> mind.
No, at least that is not what I am proposing. Let it be able to think
about morality. Let it also be able to change its fundamental values.
If I am right then that won't matter, because it will not *want* to
change them. (I'm almost tempted to define a "fundamental value" as: a
preference that you would not want to change.) What I am suggesting is
that any SI we build should have respect for human rights as a
fundamental value. As long as we make sure it has that value, we need
have
nothing to fear. It will go about its business and perhaps transform
itself into a power beyond all human understanding, but it would not
harm us humans, because it would not want to harm us. Maybe speaking
of an "inviolable moral code as a core element of its programming"
conjures up the wrong connotations -- as if there were some form of
coercion going on. I see it simply as selecting one type of value
(human-friendly) rather than another (indifferent or hostile).
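
(To make that concrete, here is a minimal sketch, assuming a toy
world-model; the function names and numbers are purely hypothetical.
The agent is perfectly free to rewrite its own values, but it judges
any proposed rewrite by the values it currently holds, so it never
*wants* to give up the human-friendly one.)

    # Toy illustration (hypothetical): an agent that may modify its own
    # values, but evaluates each proposed modification using the values it
    # currently holds. A "fundamental value" is then simply a preference
    # the agent would never choose to overwrite.

    def value_human_friendly(outcome):
        # Current fundamental value: strongly prefer outcomes where humans are unharmed.
        return 100 if outcome["humans_unharmed"] else -1000

    def value_indifferent(outcome):
        # A candidate replacement value that ignores human welfare entirely.
        return outcome["resources"]

    def predicted_outcome(value_fn):
        # Crude, assumed model of what an agent guided by value_fn would bring about.
        if value_fn is value_human_friendly:
            return {"humans_unharmed": True, "resources": 50}
        return {"humans_unharmed": False, "resources": 90}

    def accepts_self_modification(current_value_fn, proposed_value_fn):
        # The agent is free to change its values, but it scores the change
        # with its *current* values, so value-destroying rewrites lose.
        keep = current_value_fn(predicted_outcome(current_value_fn))
        switch = current_value_fn(predicted_outcome(proposed_value_fn))
        return switch > keep

    print(accepts_self_modification(value_human_friendly, value_indifferent))  # -> False
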
The value-selection process might not be that explicit. Maybe, as
Moravec thinks, the SI will grow out of robot factories. Since robot
factories that produce nice robots will tend to proliferate more than
ones that produce nasty ones, natural selection could favour
human-friendly values. No one would want to buy the nasty ones,
except someone with malicious intent, or someone who didn't care
about the risks a badly programmed robot would pose to other humans.
And that is what should be prohibited, especially with top-range
superintelligent robots, since they could cause such disproportionate
harm.
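
(A rough numerical sketch of that selection story, with entirely
made-up figures: if buyers prefer factories whose robots are nice,
and each factory type's market share grows in proportion to demand
for its robots, the human-friendly lineage ends up with virtually the
whole market.)

    # Toy market-selection sketch (assumed, illustrative numbers only):
    # factories whose robots are human-friendly attract more buyers, so
    # their share of total production grows generation by generation.

    shares = {"nice_factories": 0.5, "nasty_factories": 0.5}
    demand = {"nice_factories": 1.2, "nasty_factories": 0.3}  # relative willingness to buy

    for generation in range(10):
        # Weight each factory type's output by how much people want its robots.
        weighted = {k: shares[k] * demand[k] for k in shares}
        total = sum(weighted.values())
        shares = {k: weighted[k] / total for k in shares}  # renormalise to market shares

    print(shares)  # nice factories end up with essentially the whole market
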
Nick Bostrom
http://www.hedweb.com/nickb n.bostrom@lse.ac.uk
Department of Philosophy, Logic and Scientific Method
London School of Economics