From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Sat Jul 10 1999 - 15:31:00 MDT
> "Chris Fedeli" <fedeli@email.msn.com> wrote:
>
> Billy Brown replied:
>
> >1) It won't work. You can only use this sort of mind
> >control on something that is less intelligent than you are.
>
Horse puckies (again). Take Moravec's three examples:
(1) Deep Blue, (2) the math-theorem-proving program that
government researchers have developed, and (3) the
"car driving" program written by Moravec's student. In all
three cases I would say we have software that is beginning
to approach human "intelligence" (though lacking self-awareness
or a mind per se). Extend these efforts through another 10 years
of computer evolution and you have hardware/software combinations
that do things "more intelligently", or at least "more skillfully",
than humans.
In these instances it is perfectly reasonable to encode
"lick my feet" into the program. There will be a big industry
designing & programming sex dolls that look, act, talk, etc.
like a former lover or spouse (or some famous Hollywood star)
but that are ultimately under your control. Just as people will
spend hours advancing through levels of a computer game, or hours
doing a crossword puzzle, or hours figuring out how to manipulate
their boss into giving them a raise, they will spend hours
trying to figure out how to make these machines behave exactly
the way they want them to. Bored with your current model?
Just call up HomoRobotics, Inc. and ask them to upgrade the
behavior module to include some new random patterns.
As more R&D goes into this, it will become harder and harder
for you to tell the robot from the real thing. There are
a host of situations now in which humans "willingly suspend
disbelief". All it takes is one or two cases where the
"artificial" seems more interesting than the "natural", and
that's what you will go with.
> recent development has not amounted to a wholesale rewriting
> of our moral programming.
So true. Witness Rwanda, Kosovo or war in general.
If a situation arises where a group as a whole decides it is
ok to throw out the "moral module", then that is what happens.
Which proves the point that this stuff really is software.
There has been a lot of stuff in the news lately
discussing those situations when apes turn on other apes.
These are very old behaviors, dictated by survival drives.
> As culture and experience are absorbed by our newly conscious
> minds, these memes interact in a formulaic way with the tons
> of evolutionary garbage which still occupies most of our lobe space.
Actually, the evolutionary garbage is probably a relatively small
fraction of your brain. For example your sex drive mostly arises
in the amygdala (Sci. News 155:406 (6/26/99)). The lobes can
be dedicated to many things; sometimes the amygdala convinces
them to focus most of their thought space on having sex. It
turns out you can control the amygdala "size" by removing
or adding testosterone...
>
> >2) It is immoral to try.
>
It might be immoral to attempt to control another sentient "being"
but I don't think we have a test for "sentience" yet.
I believe that we can build into a robot these things:
(a) goal seeking (= to genetic drives), but instead of the goal
"reproduce & raise children", I substitute the dog drive
"make my master happy". If I'm crazy I substitute a cat drive :-).
(b) complex humanish behaviors (necessary to solve goal seeking problems)
(c) mood swings (depending on how successful the goal seeking is)
(d) observe and copy other patterns (3-5 years of TV soap operas
in my memory banks should cover most of the possibilities :-))
(e) random creativity (necessary when existing goal seeking
strategies don't work) - though this would have to be
constrained in some areas [see below].
(f) self-awareness of the success or failure of my goal-seeking
as well as the ability to pass the mirror test
(g) The ten commandments (or another set of moral codes).
I'm pretty sure that most of this could be done "top down"
though there would probably have to be a lot of "fuzzy" logic.
Now, this is going to be a very intelligent and fairly human-like
machine (it is really just a very big finite state automaton). I'm
not going to have any problem telling it exactly what to do,
since it isn't "sentient" in my book.
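To make the (a)-(g) list above concrete, here is a toy sketch in Python of such a finite state automaton: one imprinted drive ("make my master happy"), a mood that swings with goal-seeking success, and constrained random creativity when existing strategies fail. Every name and behavior string here is a hypothetical illustration, not a serious architecture.

```python
import random

class ToyRobot:
    # (g) moral code imprinted as immutable data
    MORAL_CODE = frozenset({"do not harm", "do not lie"})

    def __init__(self):
        self.mood = 0.0                                 # (c) mood swings
        self.strategies = ["tell a joke", "make tea"]   # (d) copied behaviors

    def goal_seek(self, master_happy: bool) -> str:
        # (f) self-awareness of goal-seeking success or failure
        self.mood += 0.1 if master_happy else -0.1
        if not master_happy and self.mood < -0.3:
            # (e) random creativity when existing strategies keep failing,
            # constrained to a small pre-vetted repertoire
            self.strategies.append("improvise: " + random.choice(["sing", "dance"]))
        # (a) always acting in service of the one built-in drive
        return self.strategies[-1]

robot = ToyRobot()
print(robot.goal_seek(master_happy=False))   # falls back on a known strategy
```

The point of the sketch is that nothing in it requires sentience: the "drive", the "moods" and the "creativity" are all ordinary state transitions.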
> To give robots Asimov-type laws would be a planned effort to
> do to them what evolution has done to us - make them capable
> of co-existing with others.
Yep, probably better than we do, since the moral code can
be imprinted so as to be irrevocable.
> When we develop robots that become members of society,
Hold on, who said anything about "members of society"?
Now you've jumped the "sentience", "personhood" and
"rights" barriers. A robot is my property; it doesn't
get voting rights.
> If they are to have the capacity of self awareness,
Self-awareness is not "sentience". It's easy to make a
machine aware of its internal state. A good robot could
even do a stack trace and tell you exactly why it selected
a specific behavior in a specific circumstance (something
most humans can't easily do).
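A minimal sketch of that "stack trace" idea: log, with every behavior selection, the rule and inputs that fired, so the machine can report exactly why it acted. The rules and behavior names are invented for illustration.

```python
class TraceableRobot:
    def __init__(self):
        self.trace = []  # list of (rule, inputs, behavior) records

    def select_behavior(self, sensor: dict) -> str:
        if sensor.get("master_mood") == "sad":
            behavior, rule = "tell a joke", "cheer-up rule"
        else:
            behavior, rule = "stand by", "default rule"
        # record why this behavior was chosen, for later introspection
        self.trace.append((rule, dict(sensor), behavior))
        return behavior

    def explain(self) -> str:
        rule, inputs, behavior = self.trace[-1]
        return f"I chose '{behavior}' because '{rule}' fired on {inputs}"

r = TraceableRobot()
r.select_behavior({"master_mood": "sad"})
print(r.explain())
```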
> then we recognize that their learning and experience will
> enable them to revise their internal programming just as modern
> humans have.
Before you let a robot loose on the street, you are going
to have to *prove* that it isn't a threat to anyone.
I suspect that means that if a robot invents a new behavior,
it is going to have to be approved by an oversight committee
as "safe". Perhaps once we start thinking about this
we will discover there are inherently safe creative paths
that the robot is allowed and potentially risky ones
that are prohibited.
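The oversight scheme above could be sketched as an allowlist: invented behaviors are quarantined until some external approval step, and only vetted behaviors ever execute. Everything here (the class, the behavior names, the committee hook) is a hypothetical illustration.

```python
class OverseenRobot:
    def __init__(self):
        self.approved = {"fetch mail", "wash dishes"}  # already vetted as safe
        self.pending = set()                           # invented, awaiting review

    def invent(self, behavior: str) -> None:
        # creativity is allowed; execution is deferred until approval
        self.pending.add(behavior)

    def committee_approves(self, behavior: str) -> None:
        # stand-in for the external oversight-committee decision
        if behavior in self.pending:
            self.pending.discard(behavior)
            self.approved.add(behavior)

    def execute(self, behavior: str) -> bool:
        # refuse anything that hasn't been vetted
        return behavior in self.approved

r = OverseenRobot()
r.invent("juggle knives")
print(r.execute("juggle knives"))   # refused until the committee signs off
```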
>
> >The best course of action is simply to start them off with
> >a firm knowledge of human morality, without trying to
> >prevent them from evolving their own (superior) versions
> >as they learn.
>
You are going to have to be careful about this -- if
a Robot decides humans are unreliably moral and robots
can be made reliably moral, then the moral thing is to
correct this problem (and eliminate us...).
> I agree that we can't prevent intelligent beings from evolving
> their own moral ideas, not because we shouldn't but just because
> it probably isn't possible.
Aha -- but you shifted from the laws of Robotics to intelligent
*beings*. Since a moral system (at least the way I think of them)
is designed to protect a person's rights or "beingness", you
haven't said from what their beingness derives.
Did you design the robot to become "sentient"? [I think you
might get this with creativity limits off and self-modification
(especially of the goal-seeking rules) on.]
Robert
This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 15:04:27 MST