From: Chris Fedeli (fedeli@email.msn.com)
Date: Sun Jul 11 1999 - 08:41:46 MDT
Eliezer S. Yudkowsky wrote:
>>recent development has not amounted to a
>>wholesale rewriting of our moral programming.
>Whose moral programming are we talking
>about? Mine is fairly well rewritten - I wrote
>the specs for an AI and I try to make mine
>match the specs, to the limits of what you
>can do with software alterations.
I was referring to people in general. The imprint of our
evolutionary heritage is still very visible in the behaviors
and choices of 20th century humans, although the cultural
evolution of the past few thousand years has begun to create
many subtle and interesting variances.
We should be cautious in evaluating how thoroughly any of us
have separated ourselves from much of this moral
programming, which includes all the broad imperatives of
survival. Suicide is one way to prove that you're no slave
to your genes, but so is using condoms and remaining
childless. We may find independent, *philosophical*
rationales for all of our actions, but that doesn't mean
that the genetic legacy isn't still playing an
influential, covert role in many of our choices (which isn't
necessarily bad).
>>To give robots Asimov-type laws would
>>be a planned effort to do to them what
>>evolution has done to us - make them
>>capable of co-existing with others.
>Wrong-o. Nothing that can learn - nothing
>that has a data repository that can
>change - can have an irrevocable moral
>code.
Certainly not irrevocable, but highly resistant to drastic
change. A "conservative" set of moral instincts might be
responsible for the fact that the human race has not
committed mass suicide by now. Where once the proto-human
ate, survived, and reproduced for no conscious reason, now we
stare at the sky and ask "what's it all about?" to no
certain answer. So we create meaning systems like religion
that give us the intellectual justification to carry on.
Would we have bothered doing the same absent a billion years
of genetic survival instincts?
>We aren't talking about human moral
>systems, which derive from game theory
>and evolution. We're talking about AI moral
>systems, which are quite different.
I'm not sure I understand that comment. AI morality will be
designed and programmed by conscious humans, unlike our own
morality which was designed mostly by the unconscious
workings of mother nature. This is a big advantage, since
we'll be able to get rid of vestigial instincts like "use
aggression against conspecifics to ensure the maximum
proliferation of your offspring," along with many other
outdated tendencies that linger to find new outlets of
expression in the modern world.
But in creating a moral system de novo (for AI's, for
ourselves, or for both) we can only put into it what we
bring to the table. Our knowledge of our own moral
programming and its strengths and weaknesses will have to be
the jumping-off point for any new design.
Chris Fedeli