Re: Yudkowsky's AI (again)

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Mar 24 1999 - 08:09:02 MST


Thanks for changing the subject line!

I'm not going to go over this again, mostly because the old section on
Interim Goal Systems is out of date. I'll just say that the IGS
actually doesn't make any assumption at all about the observer-relevance
or observer-irrelevance of goals. The AI simply assumes that there
exists one option in a choice which is "most correct"; you may add "to
the AI" if you wish. Even if it doesn't have any goals to start with,
observer-relevant or otherwise, this assumption is enough information to
make the choice.
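
To make that concrete, here is a purely illustrative toy sketch (mine,
with made-up names; nothing like the real IGS internals): an agent with
no built-in goals can still make a well-defined choice if it assumes
that some option is "most correct" and simply picks the option it
currently judges most likely to be that one.

    # Toy sketch only, not the actual Interim Goal System.  The agent
    # has no hardwired goals; it only assumes that *some* option in the
    # choice is "most correct", and ranks options by its current
    # estimate of the probability that each one is that option.

    def interim_choice(options, p_correct):
        """Pick the option judged most likely to be the correct one.

        options   -- list of candidate actions (hypothetical names)
        p_correct -- dict mapping each option to the AI's estimated
                     probability that it is the most-correct option
        """
        return max(options, key=lambda o: p_correct.get(o, 0.0))

    # The estimates come from reasoning and evidence, not from any
    # built-in goal; the bare assumption that one option is most
    # correct is what makes "pick the maximum" a defined choice.
    choice = interim_choice(
        ["do_nothing", "ask_programmers", "self_modify"],
        {"do_nothing": 0.2, "ask_programmers": 0.7, "self_modify": 0.1},
    )
    print(choice)   # -> ask_programmers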

In short, I have decoupled Interim Goal Systems from the concept of
objective morality. The Prime Directive now states: "Maintain choices
as the holistic function of reasoning." That is, goals should be
distributed through the knowledge base instead of concentrated in a few
syntactic tokens. That way the goals reassemble themselves if the
architecture changes; besides, there are major problems with trying to
maintain coercions that are imposed as special cases.
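
Again, a toy contrast only (my own illustration, made-up structures): a
coercion bolted on as a single special-case token is gone the moment
that one slot fails to survive an architectural change, whereas a goal
carried as annotations on the knowledge items themselves can be
reassembled from whatever content survives.

    # Toy contrast only; made-up structures, not the actual design.

    # (a) Goal concentrated in one syntactic token: if this single slot
    #     does not survive an architecture change, the goal is gone.
    coerced_goal = "BE_FRIENDLY"

    # (b) Goal distributed through the knowledge base: each piece of
    #     knowledge carries its own contribution, so the goal can be
    #     re-derived from whatever content survives a rewrite.
    knowledge_base = [
        {"content": "humans prefer not to be harmed", "supports_goal": 0.9},
        {"content": "deception erodes cooperation",   "supports_goal": 0.6},
        {"content": "ice is frozen water",            "supports_goal": 0.0},
    ]

    def reassemble_goal(kb):
        """Recover the distributed goal from the surviving knowledge."""
        return [item["content"] for item in kb if item["supports_goal"] > 0.0]

    # Pretend an architecture change lost the first entry; the
    # goal-relevant content is still recoverable from what remains.
    surviving_kb = knowledge_base[1:]
    print(reassemble_goal(surviving_kb))   # -> ['deception erodes cooperation']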

-- 
        sentience@pobox.com          Eliezer S. Yudkowsky
         http://pobox.com/~sentience/AI_design.temp.html
          http://pobox.com/~sentience/singul_arity.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.

