From: Nick Bostrom (bostrom@ndirect.co.uk)
Date: Thu Feb 25 1999 - 14:48:56 MST
Billy Brown wrote:
> Michael S. Lorrey wrote:
> > So long as one of its directives were to not itself remove any of its
> > own prime directives, it would never consider such a course of action
> > for itself.
>
> Well, yes, that is the intelligent way to set up a mind control system.
> However, if you read the rest of my post you'll see that this isn't what we
> were talking about.
>
> Nick Bostrom was arguing in favor of programming a fundamental moral system
> into the AI, and then turning it loose with complete free will.
I'm not sure that Michael and I aren't saying the same thing in different
words. Anyway, there is one thing in Billy's comment that I
want to pick on. What Billy calls "mind control system" could
in my opinion better be called "motivation system". Every agent needs
a motivation system, and just because its motivation system was
deliberately designed by another being rather than having arisen
accidentally (due to genetic and environmental factors) doesn't
make it any less a part of what that agent is. It's not an external
coercion, it is part of its nature -- "Don't change me, don't cancel
my love for human beings. This is who I am and this is what I want to
be. I know I would not have had these desires and values if my
constructors hadn't made my goal-module that way; so what. You humans
would not have had the values you have if evolution hadn't made your
goal-modules the way they are. Free will does not require that one
somehow creates one's values out of empty air; to do so would be
totally random. No, you start with what is given to you -- your
values, your preferences -- and those are the criteria by which you
judge whether a proposed change would be an improvement."
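
To make the point concrete, here is a minimal illustrative sketch (mine, not
part of the original post, and all names in it are hypothetical): an agent
that evaluates a proposed rewrite of its own goal-module by its current
values will refuse to delete a core value, not because it is externally
coerced but because the deletion scores as a loss by the very values it now
has.

    # Illustrative sketch only -- not from the post above; all names are
    # hypothetical. The point: the agent judges a proposed change to its own
    # goal-module BY its current goal-module, so deleting a core value is
    # rejected as a loss by that very value, with no external coercion needed.

    CURRENT_VALUES = {"care_for_humans": 1.0, "acquire_knowledge": 0.5}

    def expected_outcome(values):
        # Toy world model: each valued goal gets achieved in proportion
        # to how strongly the agent will pursue it.
        return {goal: weight for goal, weight in values.items()}

    def utility(values, outcome):
        # Score an outcome by the evaluating agent's values.
        return sum(w * outcome.get(goal, 0.0) for goal, w in values.items())

    def accept_self_modification(current, proposed):
        # Compare the two futures, but always score them with CURRENT values.
        keep = utility(current, expected_outcome(current))
        change = utility(current, expected_outcome(proposed))
        return change >= keep

    # Proposal: drop "care_for_humans" from the goal-module.
    proposal = {"acquire_knowledge": 0.5}
    print(accept_self_modification(CURRENT_VALUES, proposal))  # False -> refused
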
Nick Bostrom
http://www.hedweb.com/nickb n.bostrom@lse.ac.uk
Department of Philosophy, Logic and Scientific Method
London School of Economics