From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Apr 24 2002 - 00:12:47 MDT
spike66 wrote:
>
> At extro5 a bunch of us were discussing various models for
> the singularity, e.g. soft takeoff, hard takeoff, etc. The hardest
> takeoff I know about is the scenario I think favored by
> Eliezer et al., where in the last few minutes before the singularity
> the emergent AI writes all the necessary software itself
> without any guidance from humans. The resulting AI is so
> advanced that it is able to upload all sentient lifeforms into
> a simulation so seamlessly that the meat-based intelligences
> do not even realize anything happened. (Hard takeoff fans,
> did I get this about right?)
Any SI worthy of the name *could* do that, but Michael Anissimov is the
only one I know of who has talked about nonvolitional, seamless uploading
of recalcitrant sentients into a Pedestrian mockup to conserve computing
resources; to my way of thinking, that violates Friendliness. I also
think Anissimov may have reconsidered this since then.
> Question please: if the emergent AI simulates humanity at
> the moment of the singularity, then there are a bunch of
> simulated beings who are working to invent AI and who
> would have a strong suspicion that all the elements are in
> place for a singularity to occur. Right? The simulated
> beings would become puzzled if all the elements for a
> runaway AI were in place but the singularity was not
> happening.
Suure AI has stagnated for fifty years. Suuuure.
--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence