Re: Singularity: AI Morality

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Dec 15 1998 - 14:04:49 MST


Samael wrote:
>
> 1) One must have a reason to do something before one does it.
> 2) If one has an overarching goal, one would modify one's subgoals to reach
> the overarching goal but would not modify the overarching goal, because one
> would not have a reason to do so.

I suggest that you take a look at
http://tezcat.com/~eliezer/AI_design.temp.html#det_igs
for an explanation of how to build goal systems without initial
("overarching") goals.

-- 
        sentience@pobox.com         Eliezer S. Yudkowsky
         http://pobox.com/~sentience/AI_design.temp.html
          http://pobox.com/~sentience/sing_analysis.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.