Re: Yudkowsky's AI (again)

From: Dan Fabulich (daniel.fabulich@yale.edu)
Date: Wed Mar 24 1999 - 16:54:51 MST


At 09:09 AM 3/24/99 -0600, you wrote:
>I'm not going to go over this again, mostly because the old section on
>Interim Goal Systems is out of date. I'll just say that the IGS
>actually doesn't make any assumption at all about the observer-relevance
>or observer-irrelevance of goals. The AI simply assumes that there
>exists one option in a choice which is "most correct"; you may add "to
>the AI" if you wish. Even if it doesn't have any goals to start with,
>observer-relevant or otherwise, this assumption is enough information to
>make the choice.

It's enough for the AI to make the choice, but "most correct to the AI" is
not the same as "most correct for me" if subjective meaning is true. And
contrary to your earlier claims, I DO have goals with value if subjective
value is true, and those values may well be contrary to the AI's values.

Again, it seems to me that if subjective meaning is true, then I can and
should oppose building a seed-AI like yours until I, myself, am some kind
of power. What's wrong with this argument? If it holds, it seems to
annihilate your theory about the interim meaning of life.

-Dan

     -IF THE END DOESN'T JUSTIFY THE MEANS-
               -THEN WHAT DOES-
