RE: Robots in Social Positions (Plus Intelligent Environments)

From: Chris Fedeli (fedeli@email.msn.com)
Date: Sat Jul 10 1999 - 09:00:11 MDT


Mark Phillips wrote:

>>My intuition though, goes for a kind of
>>modified combination of "Asimov's Laws
>>of Robotics" plus something closely resembling
>>a robust set of liberal/civil libertarian protocols for
>>humans vis-a-vis such systems.

Billy Brown replied:

>It seems to me that you are suggesting that we
>implant such "Laws" into their code as moral
>coercions. We've argued over that before on this list,
>and the summary position of my camp is as follows:

I'm sorry I missed this earlier discussion, but when I
see the term "coercion" I have to chime in. All of law is
coercion, the point being that if behavior didn't need to be
coerced there would be no need for laws, and we would
have none. Whether we are coerced by external threats
or deeply ingrained moral inclinations, there is no behavior
that isn't influenced by rules which take into account the
world outside our own minds.

>1) It won't work. You can only use this sort of mind
>control on something that is less intelligent than you are.

Well... Evolution by natural selection has ingrained scores
of complicated rules for social interaction into human
brains.
True, the advent of human culture has created some elbow
room whereby those rules are revised and amended in ways
the selfish genes never intended (called learning), but this
recent development has not amounted to a wholesale rewriting
of our moral programming. As culture and experience are
absorbed by our newly conscious minds, these memes interact
in a formulaic way with the tons of evolutionary garbage
which still occupies most of our lobe space.

>2) It is immoral to try.

To give robots Asimov-type laws would be a planned effort to
do to them what evolution has done to us - make them capable
of co-existing with others. We are already finding portions
of our evolutionary moral heritage distasteful, and when we
do, we revise it using the tools of culture such as law
(i.e., outlawing rape and murder, two perfectly respectable
moral choices from an evolutionary perspective). In the
future we'll continue to do this using the more powerful
tools of neuroscience and the like.

When we develop robots that become members of society, they
will need the kind of moral frame of reference that we take
for granted. If they are to have the capacity for self
awareness, then we recognize that their learning and
experience will enable them to revise their internal
programming just as modern humans have.

>3) It is likely to be unnecessary. If they are really that
>smart, they will know more about morality than we do.
>The best course of action is simply to start them off with
>a firm knowledge of human morality, without trying to
>prevent them from evolving their own (superior) versions
>as they learn.

Superior in the Nietzschean sense? Set a default option for
moral nihilism on our future AIs and that's just what we'll
get. Nothing is more superior (from an individual's
perspective) than the ability to be calculating and wholly
without compunction.

I guess I'm really not too far from you on this. I agree
that we can't prevent intelligent beings from evolving their
own moral ideas, not because we shouldn't but just because
it probably isn't possible.

Anyway, social systems can be built for amoral sentients -
modern police states already exist to accommodate an
increasingly cultured and "morally flexible" populace.

Chris Fedeli


