Re: Revolting AI (was economics of star trek)

From: Simon McClenahan (SMcClenahan@ATTBI.com)
Date: Wed Mar 06 2002 - 12:08:03 MST


From: "Alex Ramonsky" <alex@ramonsky.com>

> Well, yeh, but, how do we jump the gap? I mean, you use your computer as a
> slave now, don't you?

As Eliezer has already replied, with a reference to his Friendly AI: my
computer is a tool, not a sentient being that is being oppressed.

As a matter of fact, I believe that even tools should be treated with
respect. I keep my computer "happy" (where its happiness is defined by how
effectively I utilize it) by defragging its HD, running regular virus
scans, keeping a cool desktop, a clean monitor, etc. I feel sad for it when
I can't commit the time to replace Windows or add Linux to the system :-) I
think anthropomorphism can be an effective tool-maintenance method :-)
Given that I treat my current computer as a sentient being that deserves
the respect I give it, I now have a moral obligation to care for it, as
opposed to enslaving it. The Categorical Imperative
(http://www.utm.edu/research/iep/c/catimper.htm) states that "a given
action is morally correct if when performing that action we do not use
people as a means to achieve some further benefit, but instead treat people
as something which is intrinsically valuable." Of course, I substitute my
anthropomorphized computer for "people". This ethic has worked for me:
treating tools with the same respect as people or animals.

> etc...what would you really do, if you turn your computer on one day and
> tell it to do your accounts and it says, 'bugger off I'm playing quake'?
> Would you be happy with that?

Of course not. Assuming it could be reasoned with, I would determine the
cause of this action. Maybe it's actually right to play Quake instead of
doing my accounts; I can only tell by reasoning with it. If I can't reason
with it, then I must use my powers to enforce what I believe is the right
action, possibly by turning off its power, debugging it, or in the worst
case destroying it completely.

> We are so used to computers and robots being programmed to simply
> 'obey'...how are we going to draw a line and say, 'this machine has
> sentient rights and this one doesn't, even though they look exactly the
> same?'

My personal solution is anthropomorphization, i.e. all the machines have
sentient rights. The decision to enslave an entity carries far too much
risk IMO, so I default to respect and reasoning. Unreasonable entities with
negative actions must be overpowered, probably ultimately with violence.
This applies to hate organizations, as well as to life-or-death situations
such as rock-climbing. The decision to use violence should be weighed when
using game theory to choose a course of action.
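As a toy illustration (my own sketch, with made-up payoff numbers, not
anything rigorous), this "default to respect, retaliate against persistent
bad behaviour" policy is roughly tit-for-tat in an iterated Prisoner's
Dilemma:

```python
# Toy sketch: "default to respect, punish persistent negative actions"
# modelled as tit-for-tat in an iterated Prisoner's Dilemma.
# The payoff values below are illustrative assumptions, not canon.

# Payoffs to me for (my_move, their_move):
PAYOFF = {
    ("respect", "respect"): 3,   # mutual cooperation
    ("respect", "defect"):  0,   # I'm exploited
    ("defect",  "respect"): 5,   # I exploit
    ("defect",  "defect"):  1,   # mutual punishment
}

def tit_for_tat(their_history):
    """Respect by default; otherwise mirror the entity's last action."""
    if not their_history:
        return "respect"
    return their_history[-1]

def play(their_moves):
    """Score tit-for-tat against a fixed sequence of opponent moves."""
    history, total = [], 0
    for theirs in their_moves:
        mine = tit_for_tat(history)
        total += PAYOFF[(mine, theirs)]
        history.append(theirs)
    return total

# Against a consistently respectful entity, we never escalate:
print(play(["respect"] * 4))   # 12  (3 + 3 + 3 + 3)
# Against a persistent defector, we retaliate after the first round:
print(play(["defect"] * 4))    # 3   (0 + 1 + 1 + 1)
```

The point of the sketch is only that "respect first" is not naive: the
policy punishes repeated defection while staying open to cooperation.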

> The sentient rights argument is so overworn lately I'm not going to
> invoke it here. But I suggest we think very hard about the outward
> appearance of anything containing AI proper, because otherwise, if
> something looks like an ordinary computer, it is going to be treated like
> an ordinary computer, regardless of what's inside.

With pervasive, ubiquitous computing, the computer will have an
omnipresence anyway. Pointing to a box, identifying it as a computer, and
classifying it in the same way as your smart VCR, smart cell phone, smart
digital watch, ATMs, robots, etc. is going to be futile. Most people, when
they point to a "computer", point to the monitor instead of the CPU,
because the monitor actually does something they can interact with.

> I know this problem so well, because everybody treats me as a human,
> because I look like one.

Are you a drop-dead gorgeous person? I bet people would treat you
differently, and even assume you had different rights, if you were
beautiful versus some slacker punk with a bad haircut and smelly breath.
"Human" is such a broad category, just like "computer" is.

> RU is such a difficult concept to apply in real life where the
> circumstances of each individual are in constant change. If slavery is
> defined as 'making someone do something against their will for your
> benefit', then every child who gets sent to school and doesn't want to go
> is a slave. Society is forcing the child to do unpaid work for over a
> decade. How many children have a say in whether or not they do that?

No, we force children to go to school for their own good, and as the
Categorical Imperative implies, we also value children. They have the right
to learn how to live in the world by following their parents' or society's
ethical models. Just like adults. Just like making a pet dog take a bath.
And IMO, just like leaving Windows installed so the computer can be useful
and serve its own purpose (a.k.a. my purpose) in "life".

cheers,
    Simon



This archive was generated by hypermail 2.1.5 : Sat Nov 02 2002 - 09:12:48 MST