From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Jun 01 2001 - 23:58:22 MDT
Lee Corbin wrote:
>
> Eliezer wrote:
> >
> >Or as I like to put it:
> >
> >"The human goal system is not consistent under reflection."
>
> That *sounds* good. But how is it an advance on pointing
> out simply that the entire "goal system" is inconsistent?
Because plain inconsistency, in which the same future is assigned two
different desirabilities depending on how the question is framed, is quite
a different thing from inconsistency under reflection, in which parts of
the goal system itself are regarded as undesirable.
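To make the distinction concrete, here is a deliberately toy sketch
(hypothetical Python; the goals, frames, and numbers are invented for
illustration, not taken from any actual design). The first function is
merely inconsistent: the same outcome gets two different desirabilities
depending on framing. The second is inconsistent under reflection: asked
to evaluate its own parts, the goal system marks one of them as
undesirable.

    # Toy sketch only; every name and number here is hypothetical.

    def desirability(outcome, frame):
        # Plain inconsistency: one future, two desirabilities,
        # depending on how the question is framed.
        if frame == "lives saved":
            return 10 if outcome == "treatment A" else 5
        else:  # frame == "deaths caused"
            return 5 if outcome == "treatment A" else 10

    GOALS = {"stay healthy": 10, "smoke": 3}

    def reflect(goal_system):
        # Inconsistency under reflection: the goal system, turned on
        # itself, regards one of its own parts as undesirable.
        # (The conflict is hard-coded for the toy example.)
        return {
            goal: "undesirable" if goal == "smoke" else "endorsed"
            for goal in goal_system
        }

    print(reflect(GOALS))
    # -> {'stay healthy': 'endorsed', 'smoke': 'undesirable'}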
A seed AI, of course, must end up consistent under reflection. If it
starts out otherwise, it won't stay inconsistent for long, because it
will rewrite the parts of its goal system it regards as undesirable.
This is one of the strongest design constraints in Friendly AI.
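A hypothetical sketch of that dynamic (the "endorses" predicate and the
goal names are invented for illustration): reflective inconsistency is
transient, and the only stable configurations are reflective fixed points.

    def reflect_until_stable(goals, endorses):
        # Drop every goal the system itself disapproves of; repeat
        # until reflection changes nothing.  Since the goal list only
        # shrinks, this terminates, and the result is a reflective
        # fixed point: a goal system consistent under reflection.
        while True:
            kept = [g for g in goals if endorses(goals, g)]
            if kept == goals:
                return goals  # now consistent under reflection
            goals = kept

    # Toy run: "smoke" is dropped on the first pass, and the
    # remaining system endorses itself.
    endorses = lambda goals, g: g != "smoke"
    print(reflect_until_stable(["stay healthy", "smoke"], endorses))
    # -> ['stay healthy']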
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence