Re: Robots in Social Positions (Plus Intelligent Environments)

From: GBurch1@aol.com
Date: Sun Jul 11 1999 - 08:08:22 MDT


In a message dated 99-07-10 18:46:07 EDT, bradbury@aeiveos.com (Robert J.
Bradbury) wrote:

> > "Chris Fedeli" <fedeli@email.msn.com> wrote:
> >
> > Billy Brown replied:
> >
> > >1) It won't work. You can only use this sort of mind
> > >control on something that is less intelligent than you are.
> >
> Horse puckies (again). If we take Moravec's three examples,
> (1) Deep Blue, (2) the math theorem-proving program that
> government researchers have developed, and (3) Moravec's student's
> "car driving" program, then in all 3 cases I would say we have
> software which is beginning to approach human "intelligence"
> (though lacking self-awareness or a mind per se). If you
> extend these efforts another 10 years in computer evolution,
> then you have hardware/software combinations that do stuff
> "more intelligently", or at least "more skillfully" than humans.
> In these instances it is perfectly reasonable to encode into the
> program "lick my feet". There will be a big industry designing
> & programming sex dolls that look, act, talk, etc. like a former
> lover or spouse (or some famous Hollywood star) but are
> ultimately under your control.
> [ snip ]
> As more R&D goes into this, it will be harder and harder for
> you to tell the Robot from the real thing. There are
> a host of situations now in which humans "willingly suspend
> disbelief". All it takes is one or two cases where the
> "artificial" seems more interesting than the "natural" and thats
> what you will go with.

Here's the rub, as I see it from a lawyer's perspective. As it becomes
harder to tell the "robot" from the "real thing", the social and legal rules
and structures we have for dealing with the rights and responsibilities of
humans will become stretched to irrationality. Eventually, you will have
recreated the social and legal absurdities of the antebellum South, where
there were "real" people and "slave" people, with two sets of rules.
Finally, you will face the contradictions of a robotic "Jim Crow" approach,
even if you grant "personhood" to synthetics. "Driving Miss Daisy" takes on a
whole new meaning when it's the CAR that is demeaned by being treated as a
second-class person!

> > >2) It is immoral to try.
> >
> It might be immoral to attempt to control another sentient "being"
> but I don't think we have a test for "sentience" yet.

No, but there are scenarios I can envision in which developing such a test
will be a trial of our moral character as traumatic as dealing with slavery
was and race continues to be.
  
> I believe that we can build into a robot these things:
> (a) goal seeking (= to genetic drives), but instead of the goal
> "reproduce & raise children", I substitute the dog drive
> "make my master happy". If I'm crazy I substitute a cat drive :-).
> (b) complex humanish behaviors (necessary to solve goal seeking problems)
> (c) mood swings (depending on how successful the goal seeking is)
> (d) observe and copy other patterns (3-5 years of TV soap operas
> in my memory banks should cover most of the possibilities :-))
> (e) random creativity (necessary when existing goal seeking
> strategies don't work) - though this would have to be
> constrained in some areas [see below].
> (f) self-awareness of the success or failure of my goal-seeking
> as well as the ability to pass the mirror test
> (g) The ten commandments (or another set of moral codes).
>
> I'm pretty sure that most of this could be done "top down"
> though there would probably have to be a lot of "fuzzy" logic.
>
> Now, this is going to be a very intelligent and fairly human-like
> machine (it is a really big finite state automaton). I'm not
> going to have any problem telling it exactly what to do
> since it isn't "sentient" in my book.

I must be crazy to argue with you, Robert, but I think you've described a
machine of such subtle complexity that you won't be able to predict its
behavior with enough precision to relegate it to the status of a mere
inanimate object. Consider that a large majority
of people in the First World today would grant some "rights" status to
"higher" animals like primates and cetaceans. I believe the same moral sense
that motivates the desire to allow some level of autonomy to those complex
creatures will also motivate at least some people to observe rights of
autonomy for machines of the complexity you describe.
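
Just to make Robert's (a)-(g) list concrete, here is a toy sketch of what such
a "top down" goal-seeking machine might look like (Python; every name, rule and
number in it is my own hypothetical invention for illustration, not anyone's
actual design):

    import random

    FORBIDDEN = {"harm_human", "deceive_master"}   # (g) fixed, irrevocable moral code

    class ToyRobot:
        def __init__(self):
            self.mood = 0.0                           # (c) rises and falls with success
            self.repertoire = ["fetch", "tell_joke"]  # (d) behaviors copied from others
            self.log = []                             # (f) record of why actions were chosen

        def permitted(self, action):
            return action not in FORBIDDEN            # (g) filter applied before acting

        def estimate_success(self, action):
            # Stand-in for a real predictor of "will this please my master?"
            return random.random()

        def choose_action(self):
            # (a)/(b): pick the behavior judged most likely to satisfy the drive;
            # (e): if mood is low, occasionally try a novel (still filtered) variant.
            if self.mood < -0.5 and random.random() < 0.3:
                candidate = random.choice(self.repertoire) + "_variant"
            else:
                candidate = max(self.repertoire, key=self.estimate_success)
            if not self.permitted(candidate):
                candidate = "do_nothing"
            self.log.append(("chose", candidate, "mood", self.mood))
            return candidate

        def observe_outcome(self, success):
            self.mood += 0.1 if success else -0.1     # (c) mood tracks goal-seeking success

    robot = ToyRobot()
    action = robot.choose_action()
    robot.observe_outcome(success=True)
    print(action, robot.log)

Even something this crude illustrates my worry: once the repertoire, the
predictor and the mood dynamics get rich enough, the behavior of the whole
thing stops being predictable in the way we demand of mere property.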
  
> > To give robots Asimov-type laws would be a planned effort to
> > do to them what evolution has done to us - make them capable
> > of co-existing with others.
>
> Yep, probably better than we do, since the moral code can
> be imprinted so as to be irrevocable.

I see a distinction between placing a priori structure on a complex machine
and attempting to impose restrictions on machines that have developed a
certain (yet to be defined) mental relationship to their environment. In
that respect, I may inhabit a moral territory between Billy's and yours,
Robert. However, I believe we WILL have to develop a "Creator's Morality" to
address the use of a priori restrictions on the structure of an AI's mind.

> > When we develop robots that become members of society,
>
> Hold on, who said anything about "members of society".
> Now you've jumped the "sentience", "personhood" and
> "rights" barrier. A robot is my property, it doesn't
> get voting rights.

Billy's comment does beg the question, but that's because he's made clear
that he equates a certain level of "intelligence" with possession of
"social status", i.e. standing as a locus of moral rights. I agree with
him, because I believe that failing to do so utterly undermines any claim we
ourselves have to be treated as autonomous moral actors.
  
> > If they are to have the capacity of self awareness,
>
> Self-awareness is not "sentience". It's easy to make a
> machine aware of its internal state. A good robot could
> even do a stack trace and tell you exactly why it selected
> a specific behavior in a specific circumstance (something
> most humans can't easily do).

I won't comment on this, other than to say that I have the queasy feeling
that failure to address this question is like the Founders' unwillingness to
address the question of slavery when they were waxing so eloquent about
"inalienable rights". I'm dead serious about this: If we don't take this
problem seriously soon, we may be setting ourselves up for a cataclysm that
could make Gettysburg look like a schoolyard shoving match.

> > then we recognize that their learning and experience will
> > enable them to revise their internal programming just as modern
> > humans have.
>
> Before you let a Robot loose on the street, you are going
> to have to *prove* that it isn't a threat to anyone.
> I suspect that means that if a Robot invents a new behavior
> it is going to have to be approved by an oversight committee
> as "safe". Perhaps once we start thinking about this
> we will discover there are inherently safe creative paths
> that the Robot is allowed and others that are potentially
> risky and therefore prohibited.

Robert, I think you overestimate the ability of such a powerful technology to
be controlled with well-thought-out prior institutional restraints. As
robotics develops as a business, the wave-front of development will be
international and will cross essentially every industry and aspect of human
life. The "Department of Robotics" you posit would have to have one hell of
a mandate to be able to effectively govern all the possible permutations of
intelligence and environmental interaction. I suspect that it will be "out
of control" to use Kevin Kelly's term, long before any such bureau could
establish effective restraints on initial implementation and
post-implementation development of complex, robotic AIs.

> You are going to have to be careful about this -- if
> a Robot decides humans are unreliably moral and robots
> can be made reliably moral, then the moral thing is to
> correct this problem (and eliminate us...).

My point exactly, especially if we treat really advanced robots purely as
property.
  
> > I agree that we can't prevent intelligent beings from evolving
> > their own moral ideas, not because we shouldn't but just because
> > it probably isn't possible.
>
> Aha -- but you shifted from the laws of Robotics to intelligent
> *beings*. Since it would seem that moral systems (at least the
> way I think of them) are designed to protect a person's rights
> or "beingness", you haven't said from what their beingness derives.

I think Billy can be excused for fudging a little on what, in my opinion, is
the ultimate question of moral philosophy. Shall we try to do better than
the gentlemen in Philadelphia? I propose that we fail to do so at our
extreme peril.

     Greg Burch <GBurch1@aol.com>----<gburch@lockeliddell.com>
     Attorney ::: Vice President, Extropy Institute ::: Wilderness Guide
      http://users.aol.com/gburch1 -or- http://members.aol.com/gburch1
                         "Civilization is protest against nature;
                  progress requires us to take control of evolution."
                                      -- Thomas Huxley


