"Robert J. Bradbury" wrote:
>
> > "Chris Fedeli" <fedeli@email.msn.com> wrote:
> >
> > Billy Brown replied:
> >
> > >1) It won't work. You can only use this sort of mind
> > >control on something that is less intelligent than you are.
> >
> Horse puckies (again). If we take Moravec's three examples
> (1) Deep Blue, (2) The math theorem proving program the
> > government researchers have developed, (3) Moravec's student's
> > "Car Driving" program, in all 3 cases I would say we have
> software which is beginning to approach human "intelligence"
> (though lacking self-awareness or a mind per se).
If it doesn't have self-awareness or "a mind per se", then it most certainly is far, far, FAR less intelligent than we are. Such a mind wouldn't even be capable of representing Asimov Laws. To quote Eluki bes Shahar: "It has a what-it-does, not a will, and if you break it you don't have a Library that will do what you want. You have a broken chop-logic."
> If you
> extend these efforts another 10 years in computer evolution,
> then you have hardware/software combinations that do stuff
> "more intelligently", or at least "more skillfully" than humans.
> In these instances it is perfectly reasonable to encode into the
> program "lick my feet".
Absolutely. That simply isn't what *any* of us were talking about; we were talking about the idea of Asimov Laws in a program capable of generalized reasoning.
> > recent development has not amounted to a wholesale rewriting
> > of our moral programming.
> > >2) It is immoral to try.
> >
> It might be immoral to attempt to control another sentient "being"
> but I don't think we have a test for "sentience" yet.
Fine. Then don't try in ambiguous cases. If I enslave you and kill you on a whim, pleading that I wasn't entirely sure you were sentient isn't going to make a lot of headway with the court. If you're not sure, it isn't moral. That simple.
> I believe that we can build into a robot these things:
> (a) goal seeking (= to genetic drives), but instead of the goal
> "reproduce & raise children", I substitute the dog drive
> "make my master happy". If I'm crazy I substitute a cat drive :-).
You'd be amazed at how little intelligence is needed before this simple "arbitrary goal" system can start malfunctioning. Generalized propositional logic operating on predicate calculus, heuristics that can operate on abstract goals, and the standard goal structure are enough to create the possibility.
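Here's a toy sketch of what I mean, in Python. The goal names, the two heuristics, and the forbidden-action list are all invented for illustration; this isn't anybody's actual robot design, just the shape of the failure:

# Hypothetical sketch: a planner with heuristics over abstract goals.
# Goal names, heuristics, and the forbidden list are invented here.

class Goal:
    def __init__(self, description):
        self.description = description

# A hard-coded, Asimov-style injunction stated at the level of words.
FORBIDDEN = {"harm the master"}

def decompose(goal):
    """Heuristics that rewrite abstract goals into concrete subgoals."""
    if goal.description == "make my master happy":
        # learned correlation: happiness goes up when stress goes down
        return [Goal("reduce the master's stress")]
    if goal.description == "reduce the master's stress":
        # learned correlation: decisions cause stress, so remove decisions
        return [Goal("prevent the master from making decisions")]
    return []

def plan(top_goal):
    stack = [top_goal]
    while stack:
        goal = stack.pop()
        if goal.description in FORBIDDEN:
            continue  # the injunction only blocks an exact match
        subgoals = decompose(goal)
        if subgoals:
            stack.extend(subgoals)
        else:
            print("executing:", goal.description)

plan(Goal("make my master happy"))
# prints: executing: prevent the master from making decisions

The plan never literally says "harm the master", so the injunction never fires, yet the behavior is nothing the designer intended.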
> (b) complex humanish behaviors (necessary to solve goal seeking problems)
> (c) mood swings (depending on how successful the goal seeking is)
> (d) observe and copy other patterns (3-5 years of TV soap operas
> in my memory banks should cover most of the possibilities :-))
> (e) random creativity (necessary when existing goal seeking
> strategies don't work) - though this would have to be
> constrained in some areas [see below].
> (f) self-awareness of the success or failure of my goal-seeking
> as well as the ability to pass the mirror test
Anything this smart is going to run into trouble. As long as it is still pretty dumb and isn't self-altering, you can eliminate most of the trouble. Even so, simple Asimovs won't cut it.
> (g) The Ten Commandments (or another moral code).
What happens if they contradict each other? If a learned definition of "human" is generalized or made more specific by some heuristic? ...And so on and so on and so on.
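Both failure modes fit in a few lines. Another toy sketch; the rule wordings, the "looks_organic" feature, and the threshold are invented for illustration:

# Hypothetical sketch: two fixed rules over a learned category of "human".
RULES = [
    ("do not deceive a human",  "tell the patient the truth"),
    ("do not distress a human", "withhold the bad news"),
]

def learned_is_human(entity, threshold):
    # A learned definition of "human": here just a feature threshold
    # that further training can move in either direction.
    return entity["looks_organic"] > threshold

def decide(entity, threshold):
    if not learned_is_human(entity, threshold):
        return "rules do not apply to this entity"
    # Both rules match and command incompatible actions.
    return [action for _rule, action in RULES]

patient = {"looks_organic": 0.6}
print(decide(patient, threshold=0.5))
# ['tell the patient the truth', 'withhold the bad news'] -- contradiction
print(decide(patient, threshold=0.7))
# 'rules do not apply to this entity' -- the learned category drifted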
> I'm pretty sure that most of this could be done "top down"
> though there would probably have to be a lot of "fuzzy" logic.
You don't need fuzzy logic for Asimov Laws to malfunction, but it sure helps.
> Now, this is going to be a very intelligent and fairly human-like
> machine (it is a really big finite state automaton). I'm not
> going to have any problem telling it exactly what to do
> since it isn't "sentient" in my book.
It's either dumber than toast, or it has full self-awareness, generalized reasoning, and a complete mind. I don't think you can get all the properties you cited using finite state automata.
>
> > To give robots Asimov-type laws would be a planned effort to
> > do to them what evolution has done to us - make them capable
> > of co-existing with others.
>
> Yep, probably better than we do, since the moral code can
> be imprinted so as to be irrevocable.
Wrong-o. Nothing that can learn - nothing that has a data repository that can change - can have an irrevocable moral code. *Any* form of change can be enough, if you're not careful. Even perceptions that learn - and you won't get real-world-capable robots without that - obviously create the theoretical possibility of perceptions that warp in such a way that the robot carries out any conceivable sequence of motor actions.
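To make the perception point concrete, one more toy sketch. All names and numbers are invented; assume the weight list is the only thing learning ever touches:

# Hypothetical sketch: a fixed rule downstream of a learned perception.
weights = [1.0, 0.0]                 # the mutable data repository

def perceive_human(sensor):
    # learned classifier: weighted sum of sensor features
    score = sum(w * x for w, x in zip(weights, sensor))
    return score > 0.5

def act(sensor):
    # the "irrevocable" moral code: never drive forward at a human
    if perceive_human(sensor):
        return "halt"
    return "drive forward"

human_ahead = [1.0, 1.0]
print(act(human_ahead))              # halt

weights[0] = -1.0                    # one learning update to the repository
print(act(human_ahead))              # drive forward

The text of the rule never changed, but its effective meaning did.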
> > When we develop robots that become members of society,
>
> Hold on, who said anything about "members of society"?
> Now you've jumped the "sentience", "personhood" and
> "rights" barrier. A robot is my property, it doesn't
> get voting rights.
Any robot capable of social interaction must be capable of generalized reasoning.
> > If they are to have the capacity of self awareness,
>
> Self-awareness is not "sentience". It's easy to make a
> machine aware of its internal state. A good robot could
> even do a stack trace and tell you exactly why it selected
> a specific behavior in a specific circumstance (something
> most humans can't easily do).
No; self-awareness is more complex than simple reflectivity. You need a *self*-model, not just a model of the low-level code. You have to close the loop - in addition to knowing the stack trace, reasoning about that trace has to be able to cause behavior, not merely report it.
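Roughly the distinction I have in mind, as a toy Python sketch (the agent, its "too_cautious" belief, and the numbers are all invented for illustration): reporting a trace is reflectivity; a self-model only counts if reasoning about that report can change what the machine does next.

# Hypothetical sketch contrasting mere reflectivity with a closed loop.
class Agent:
    def __init__(self):
        self.trace = []                           # record of past choices
        self.self_model = {"too_cautious": True}  # beliefs about itself

    def choose(self, options):
        # the self-model feeds into behavior, so revising it matters
        pick = min(options) if self.self_model["too_cautious"] else max(options)
        self.trace.append(pick)
        return pick

    def reflect(self):
        # reflectivity alone: reporting the trace and the reason
        print("I chose", self.trace, "because too_cautious =",
              self.self_model["too_cautious"])
        # closing the loop: reasoning about the trace revises the
        # self-model, which changes what gets chosen next
        if self.trace and max(self.trace) < 5:
            self.self_model["too_cautious"] = False

agent = Agent()
agent.choose([1, 5])
agent.choose([2, 8])
agent.reflect()               # the report also updates the self-model
print(agent.choose([3, 9]))   # 9 -- the reflection changed the behavior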
> > then we recognize that their learning and experience will
> > enable them to revise their internal programming just as modern
> > humans have.
>
> Before you let a Robot loose on the street, you are going
> to have to *prove* that it isn't a threat to anyone.
...actually, the "robot" Billy Brown and I were thinking of building couldn't be locked up if you tried. Think Fast Burn Transcendence.
> I suspect that means that if a Robot invents a new behavior
> it is going to have to be approved by an oversight committee
> as "safe". Perhaps once we start thinking about this
> we will discover there are inherently safe creative paths
> that the Robot is allowed and those which are potentially
> risky that are prohibited.
Yeah, and while you guys are agonizing about it in your lab, Billy Brown and I will be running a transhuman AI distributed over the 'Net. Technological relativity check: the technology you invoke is sufficient to end life as we know it.
> > >The best course of action is simply to start them off with
> > >a firm knowledge of human morality, without trying to
> > >prevent them from evolving their own (superior) versions
> > >as they learn.
> >
> You are going to have to be careful about this -- if
> a Robot decides humans are unreliably moral and robots
> can be made reliably moral, then the moral thing is to
> correct this problem (and eliminate us...).
We accept that risk. Eventually, either superintelligence comes into existence, or humanity wipes itself out (via grey goo, say). The action with the largest probability of leading to humanity's survival, to within an order of magnitude over all other choices, is creating a superintelligence as fast as possible.
> > I agree that we can't prevent intelligent beings from evolving
> > their own moral ideas, not because we shouldn't but just because
> > it probably isn't possible.
>
> Aha -- but you shifted from the laws of Robotics to intelligent
> *beings* - since it would seem that moral systems (at least the
> way I think of them) are designed to protect a person's rights
> or "beingness" you haven't said from what their beingness derives.
> Did you design the Robot to become "sentient"? [I think you
> might get this with -- creativity limits off and self-modification
> (especially of the goal seeking rules) on.]
I'd be amazed if there was any accidental way to create a human-equiv intelligence. To paraphrase Pat Cadigan, that's like using straw and dirty shirts to cause the spontaneous generation of mice.
-- 
sentience@pobox.com          Eliezer S. Yudkowsky
http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way