From: J. R. Molloy (jr@shasta.com)
Date: Mon Sep 25 2000 - 11:44:45 MDT
Michael S. Lorrey writes,
> Primarily because it is on a virtual system that is dependent upon us in the
> real world to maintain. Just as I would not let a child loose in the world
> without supervision, I would not let a new AI loose on the net or have
> capabilities in the real world which limited our ability to supervise it. Once
> it has proven its ability and good will, controls may be loosened, but every
> such entity should always have an off switch of some kind, just as humans do.
In further support of your comments, I'd add that the "off switch" for an AI
could actually terminate or delete it. This means that it would be *much* easier
to control unruly AIs than to control human children, since we (unfortunately?)
can't actually terminate recalcitrant or sociopathic (i.e., unfriendly) human
children.
Furthermore, AIs could compete with each other to see who could be the
friendliest, and only the most friendly would be allowed to replicate -- the
rest being discarded. Similar treatment of human child populations might meet
with resistance from their mothers (to say the least).
As a result, we can deal far more severely with Mind Children than with human
children. Bottom line: I think we have more to fear from
fiendish human hackers than from Spiritual Machines, Robo sapiens, Artilects, or
Mind Children.
> Really, Zero, do you still beat your parents for being so fearful of your
> tyranny, or are they already buried in the back yard?
Really, Michael, you have a way of summing up a topic with wickedly incisive,
sagacious brevity.
Bravo!
--J. R.
"Artists can color the sky red because they know it's blue.
Those of us who aren't artists must color things the way they
really are or people might think we're stupid."
-- Jules Feiffer
[Amara Graps Collection]
This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 15:31:11 MST