From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Feb 26 1999 - 00:52:13 MST
If you think goals are arbitrary, you'll just graft them on. You have
to think of goals as a function of the entire organic reasoning system.
To hell with whether the Ultimate Truth is that goals are arbitrary or
objective or frotzed applesauce. If you want to design an AI so that
goals stick around long enough to matter, you'd better not walk around
thinking of goals as arbitrary.
You have to justify the goals, thus distributing them through the entire
knowledge base. You have to reduce the goals to cognitive components.
You have to avoid special cases. You have to make the goals declarative
thoughts rather than pieces of procedural code.
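
To put the declarative-versus-procedural point in concrete terms, here is a minimal Python sketch. The names and structure are illustrative assumptions, not any actual goal-system design; the point is only that a procedural goal is opaque control flow, while a declarative goal is inspectable data with explicit justification links into the rest of the knowledge base.

from dataclasses import dataclass, field

# A procedural goal: buried in control flow. Nothing else in the
# system can inspect it, justify it, or reason about revising it.
def act_procedural(world):
    if world["humans_in_danger"]:   # the "goal" is this hard-coded test
        return "protect humans"
    return "idle"

# A declarative goal: an ordinary proposition in the knowledge base,
# carrying explicit links to the beliefs that justify it.
@dataclass
class Proposition:
    content: str
    justified_by: list = field(default_factory=list)  # supporting Propositions

suffering_is_bad = Proposition("suffering is bad")
danger_breeds_suffering = Proposition("unchecked danger causes suffering")
protect_humans = Proposition(
    "goal: protect humans",
    justified_by=[suffering_is_bad, danger_breeds_suffering],
)

# The machinery that examines any other belief can examine the goal too.
for reason in protect_humans.justified_by:
    print(f"{protect_humans.content!r} because {reason.content!r}")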
Whack an arbitrary goal and it falls apart. Whack an integrated goal
and it regenerates. This has nothing to do with morality; it's just
pure systems logic.
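
And the whack-test itself, as a toy sketch: a knowledge base plus one trivial forward-chaining rule, with every name invented for illustration. Delete the integrated goal and a single inference pass re-derives it from its surviving justifications; delete the grafted goal, which nothing else in the KB supports, and it stays gone.

# Toy knowledge base: each rule says "if all premises are present,
# conclude this proposition." A grafted goal has no rule producing it.
RULES = [
    ({"suffering is bad", "unchecked danger causes suffering"},
     "goal: protect humans"),
]

def forward_chain(beliefs):
    """Re-derive every conclusion whose premises survive in the KB."""
    derived = set(beliefs)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

kb = {
    "suffering is bad",
    "unchecked danger causes suffering",
    "goal: protect humans",          # integrated: derivable from the rest
    "goal: some arbitrary directive",  # grafted: supported by nothing
}

# Whack both goals.
kb.discard("goal: protect humans")
kb.discard("goal: some arbitrary directive")

kb = forward_chain(kb)
print("goal: protect humans" in kb)            # True  - regenerated
print("goal: some arbitrary directive" in kb)  # False - stays dead

That is the systems-logic claim in miniature: robustness comes from redundancy of derivation, not from protecting one privileged line of code.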
--
sentience@pobox.com    Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I think I know.