From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Feb 18 2002 - 05:16:49 MST
Eugene Leitl wrote:
>
> On Sun, 17 Feb 2002, Eliezer S. Yudkowsky wrote:
>
> > People are asking the same questions they were asking in 1999. I could
> > understand if there'd been progress and yet no progress toward Friendly AI,
> > but this is just stasis. Why?
>
> You realize of course that Friendly AI is your baby. Most people outside
> this circle haven't heard about it, and I personally find Friendliness a
> mythical property. <insert the usual arguments here>
I know. But I would have expected progress somewhere, and I would have
expected people to find it useful to, e.g., distinguish between supergoals
and subgoals, regardless of their beliefs about outcomes or, for that
matter, about the nature of the interaction between supergoals and
subgoals. I'd expected that at least the *concepts* would be useful.
I.e.:
John K Clark wrote:
>
> If the AI doesn't feel that its continued existence is intrinsically more desirable
> than its oblivion, then the goal hierarchy won't matter
What is the word "intrinsically" doing in this sentence?
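A minimal sketch of what I mean, assuming a toy goal-system representation
(the class names, the single "Friendliness" supergoal, and the numbers are
all illustrative assumptions, not anything from the posts above): a subgoal
such as continued existence carries no intrinsic desirability term at all;
its desirability is derived entirely from its predicted contribution to the
supergoals.

from dataclasses import dataclass, field

@dataclass
class Supergoal:
    name: str
    desirability: float   # set by the content of the goal system

@dataclass
class Subgoal:
    name: str
    # predicted contribution of this subgoal to each supergoal, in [0, 1]
    contributions: dict = field(default_factory=dict)

    def derived_desirability(self, supergoals):
        # No intrinsic term here: zero out the supergoals and the
        # subgoal's desirability goes to zero with them.
        return sum(sg.desirability * self.contributions.get(sg.name, 0.0)
                   for sg in supergoals)

supergoals = [Supergoal("Friendliness", 1.0)]
continued_existence = Subgoal("continued existence",
                              {"Friendliness": 0.9})  # valued only as a means
print(continued_existence.derived_desirability(supergoals))  # prints 0.9

In a model like this, continued existence is more desirable than oblivion
exactly insofar as it serves the supergoals, which is why "intrinsically"
is doing no work in the sentence quoted.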
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence