From: Billy Brown (bbrown@conemsco.com)
Date: Wed Feb 24 1999 - 07:36:42 MST
hal@rain.org wrote:
> A very interesting question, independent of the posthuman question. What
> degree of control is moral for parents to exert over their children? More
> generally, what is moral when creating a new intelligence?
>
> One answer is that, as long as the resulting person is happy with his lot,
> anything is moral. You can create intelligences that are stupid or smart,
> flexible or constrained, but as long as they are happy it is OK.
>
> The opposite extreme would suggest that only intelligences with maximal
> extropy should be created: flexible, intelligent, creative, curious minds.
> Such people would have a wide range of possible behaviors. They would
> perhaps face greater dangers, but their triumphs would be all the more
> meaningful. Doing anything less when creating a new
> consciousness would be wrong, in this view.
>
> I am inclined to think that this latter position is too strong. There
> would seem to be valid circumstances where creating more constrained
> intelligences would be useful. There may be tasks which require
> human-level intelligence but which are relatively boring. Someone has to
> take out the garbage. Given an entity which is going to have limitations
> on its options, it might be kinder to make it satisfied with its life.

I am inclined to agree with you, but for different reasons. IMO, the fact
that it might be useful to deliberately create a sub-optimal intelligence
would not carry much weight in a moral argument. However, it is crucial to
recognize that we do not know what an optimal intelligence would be. We can
make pretty good guesses, especially for humanlike minds, but we can't
really prove that we aren't just passing along our own biases. Consequently,
I would argue that the minimum moral standard should be this:

If you are going to create an actual person (as opposed to an animal, or a
special-purpose device), you have a moral obligation to give them the
ability to reason, to learn, and to grow. Any sentient being should be
capable of asking questions, seeking the answers, and applying those answers
to their own thoughts and actions.

This standard still gives wide latitude in the choice of personality,
intelligence level, ease of self-modification, etc. However, it would
forbid any truly permanent form of mind control. To enforce a fixed moral
code, you must create a mind that is either incapable of thinking about
morality or (more likely) one that can think about morality but can never
change its mind.

> Generalizing the case Billy describes, if the universe is a dangerous
> place and there are contagious memes which would lead to destruction, you
> might be able to justify building in immunity to such memes. This limits
> the person's flexibility, but it is a limitation intended ultimately to
> increase his options by keeping him safe.

The most effective way to immunize an AI against a 'dangerous meme' is
education: explain what the meme claims, why some people believe it, and
why it is wrong. If your reasoning is correct, this will also allow the AI
to resist other memes that rest on the same false reasoning. If your
reasoning is wrong, and the meme is in fact correct, the AI will still be
capable of recognizing the truth when it encounters it.

> How different is this from the religious person who wants to keep
> his own child safe, and secure for him the blessings of eternal life?
> Not at all different. Although we may not agree with his premises,
> given his belief system I think his actions can be seen as moral.

I find it interesting to note that everyone I know who follows such a
religion would find the idea horrifying. Religions that include eternal
rewards and punishments also tend to stress the importance of free will.

On a more fundamental level, this is the old question of whether immoral
means can be used to achieve a moral end. If there were no other possible
way of achieving the desired goal, then you might be able to defend the
practice. As it turns out, however, that defense is not available in this
case.

Let us suppose that I believe X is true, and I am very concerned about
ensuring that my posthuman descendants also believe X to be true. I
therefore program the first posthuman AI with a full understanding of all
available arguments on the subject of X, with special emphasis on those that
lead me to believe X is true. I also make sure that the AI understands its
own fallibility, and is willing to rely on information provided by humans in
the absence of conflicting evidence.

What will happen when the AI 'grows up' and becomes an autonomous adult? It
is (by definition) far more intelligent than I am, and it knows everything I
do about X, so its decisions on the subject will always be at least as good
as my own. If X is in fact true, the AI will never change its mind. If,
OTOH, X is false, the AI may eventually realize this and change its mind.
However, this is not a bad thing! It can only happen if the AI happens upon
new information that nullifies all of my arguments for X - in other words, I
would also be convinced.

Thus, if we are willing to presume that our conclusions should be based on
reason, and that there is some rational basis for preferring our own moral
system over another one, then there is no need to resort to anything more
than education.

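To make the argument above concrete, here is a toy Python sketch (my own
illustration - the evidence sets and the believes_x function are made up)
of why the AI's view of X can only diverge from mine when there is new
evidence that would also convince me:

    # Toy model: belief in X depends only on the evidence an agent has
    # seen, and the AI is taught everything I know about X (assumptions
    # of this sketch, not claims about any real system).
    def believes_x(evidence):
        """Believe X unless some piece of evidence defeats the case for it."""
        return not any(e.startswith("defeater:") for e in evidence)

    my_evidence = {"argument for X #1", "argument for X #2"}
    ai_evidence = set(my_evidence)   # the AI starts with all of my evidence

    # While the AI knows only what I know, we agree about X.
    assert believes_x(ai_evidence) == believes_x(my_evidence)

    # The AI can only come to reject X by finding evidence I never had...
    new_fact = "defeater: an observation that undercuts my arguments"
    ai_evidence.add(new_fact)

    # ...and that same evidence, shown to me, would change my mind as well.
    assert believes_x(ai_evidence) == believes_x(my_evidence | {new_fact})
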
Billy Brown, MCSE+I
bbrown@conemsco.com