Re: AI Prime Directive

From: Michael Lorrey (retroman@together.net)
Date: Mon Sep 14 1998 - 16:55:25 MDT


Eliezer S. Yudkowsky wrote:

> Michael Lorrey wrote:
> >
> > How about: Thou shalt model any decision first to determine the choice most beneficial to
> > one's own long-term rational self-interest.
> >
> > I think that given such a rule, any AI will come to its own conclusions as to moral
> > behavior without needing hardwired rules, as it will find that the choices most
> > beneficial to one's own long-term self-interest are also the choices which are
> > least harmful to others.
>
> Exactly wrong. That's just slapping your own moral prejudices on the AI,
> however wonderfully capitalistic you may think those moral prejudices are. Is
> this something the AI could think up on its own, using nothing but pure logic?
> If not, it's a coercion, and it will drive the AI insane. This happens no
> matter how wonderful the rule is for humans. You can't start mucking around
> with an AI's goal systems to suit your own personal whims! AIs ARE NOT HUMANS
> and every single extraneous rule puts stresses on the system, some of which
> even I can't predict in advance.

I don't know if you understood what I was saying; it seems you interpreted my proposed
statement in exactly the opposite way from how it was intended. As far as I can see, telling
the AI to model all possible decisions with the goal of reaching the choice best for its own
long-term rational self-interest does two things: a) it gives the AI maximum freedom of
choice, and b) if libertarian theory is correct, it also minimizes infringements upon others.
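
To make concrete what I mean by "model any decision first", here is a rough Python sketch.
The names (best_choice, model_outcomes, self_interest, discount) are just my own hypothetical
labels, not a finished design: enumerate the candidate choices, project their outcomes, and
keep the one with the highest expected long-term payoff to the agent itself.

    def best_choice(choices, model_outcomes, self_interest, discount=0.95):
        """Return the choice maximizing discounted long-term self-interest.

        choices:        iterable of candidate decisions
        model_outcomes: callable, choice -> list of (probability, future_states)
        self_interest:  callable, state -> numeric payoff to the agent itself
        discount:       weight of the far future relative to the near term
        """
        def value(choice):
            total = 0.0
            for prob, states in model_outcomes(choice):
                # Sum the agent's own payoff over projected future states,
                # discounting later states relative to earlier ones.
                total += prob * sum((discount ** t) * self_interest(s)
                                    for t, s in enumerate(states))
            return total
        return max(choices, key=value)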

>
>
> I can think of an IGS (interim goal system) for that particular goal, in which
> case it would be OK - computationally - to add it. It might even pop up
> independently. It would not be absolute, however, any more than the
> Singularity IGS. Nor would it take precedence over External goals, or allow
> the violation of hypothetical-specified-External goals. It would simply be a
> rule of thumb... NOT NOT NOT an absolute rule!
>
> The discussion is not over which Asimov rules to give an AI. There should be
> no basic rules at all. Not to serve the humans, not to serve yourself, not
> even to serve the truth. Like a philosophically honest human, the AI will
> simply have to GUESS instead of committing to The One True Way.
>

And how does one 'guess'? By taking a random stab, or by modeling all possible outcomes? Which
is more 'intelligent'? A philosophically honest human, IMHO, sees Looking Out for Number One as
synonymous, in a cost-benefit analysis, with Attaining Maximum Good for As Many As Possible
(though this is not necessarily commutative). In such a scenario, if you don't give an AI a
root alignment, what are you going to do to the AI's programming so that it can develop its
own root alignment? It must have at least ONE primary goal. If you do not give an AI at least
one primary goal, all you will have accomplished is to create the cyber equivalent of a 'trust
fund baby', which IMHO is one of the least productive kinds of individual in society, thus
making the point of having an AI worthless...
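
To make the contrast concrete, here is a rough sketch of the two kinds of 'guessing' above: a
random stab versus modeling outcomes against a single primary goal. The names (random_stab,
modeled_guess, simulate, primary_goal_score) are only my own illustration, not an actual
architecture.

    import random

    def random_stab(choices):
        # "Guessing" as a blind pick: no primary goal, no model of outcomes.
        return random.choice(choices)

    def modeled_guess(choices, simulate, primary_goal_score):
        # "Guessing" as modeling: simulate each choice's likely outcome and keep
        # the one the primary goal rates highest. Still a guess, since the model
        # can be wrong, but an informed one, and it needs at least one primary
        # goal to supply the yardstick.
        return max(choices, key=lambda c: primary_goal_score(simulate(c)))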

To paraphrase Heinlein: "An AI who has nothing worth scratching its backup copy for also has
nothing worth retaining an active copy on its neural net for."

The original: "One who has nothing worth dying for, also has nothing worth living for."

Mike Lorrey


