Re: Yudkowsky's AI (again)

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Mar 25 1999 - 15:39:35 MST


Lee Daniel Crocker wrote:
>
> So does that mean I get to make both of your lists: sane persons
> and dangerous persons? :-)

No, just the "sane" list. If I'm not including myself on the No-Upload
list, I can hardly include you. Neither of us would dream of doing
anything irrevocable without enough superintelligence that we would have
arrived at the same truth no matter what ideas we started out with.
There's no reason to trust us less for having figured this out ahead of
the crowd.

I guess the question everyone else has to ask is whether the possibility
that late-term Powers are sensitive to the initial conditions is
outweighed by the possibility of some first-stage transhuman running
amok. It's the latter possibility that concerns me with den Otter and
Bryan Moss, or for that matter with the question of whether the seed
Power should be a human or an AI.

-- 
        sentience@pobox.com          Eliezer S. Yudkowsky
         http://pobox.com/~sentience/AI_design.temp.html
          http://pobox.com/~sentience/singul_arity.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.
