Re: singularity logic loop

From: Samantha Atkins (samantha@objectent.com)
Date: Wed Apr 24 2002 - 00:38:21 MDT


spike66 wrote:

> At extro5 a bunch of us were discussing various models for
> the singularity, e.g. soft takeoff, hard takeoff, etc. The hardest
> takeoff I know of is the scenario I believe favored by
> Eliezer et al., in which, in the last few minutes before the
> singularity, the emergent AI writes all the necessary software
> itself without any guidance from humans. The resulting AI is so
> advanced that it is able to upload all sentient lifeforms into
> a simulation so seamlessly that the meat-based intelligences
> do not even realize anything has happened. (Hard takeoff fans,
> did I get this about right?)
>
> Question please: if the emergent AI simulates humanity at
> the moment of the singularity, then there are a bunch of
> simulated beings who are working to invent AI and who
> would have a strong suspicion that all the elements are in
> place for a singularity to occur. Right?

I see no reason the SI would not let these researchers, and
everyone else who would not be harmed by knowing, in on the truth.

> The simulated
> beings would become puzzled if all the elements for a
> runaway AI were in place but the singularity was not
> happening. Alternatively, if a group of AI researchers
> created an AI, and that AI then uploaded and simulated
> humanity, the simulated beings would still be at
> work trying to create an AI, which the meta-AI would
> then need to simulate. It seems the sim would need
> to create multiple levels of simulations of itself within
> itself, an apparent contradiction. Or is it?

The seeming paradox is easily resolved by that same simple means:
once the simulated researchers are told the truth, they no longer
expect a singularity to erupt around them, and the SI has no need
to spawn an endless regress of simulations within simulations.
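
To make the logic of the regress concrete, here is a minimal sketch
(Python, purely illustrative; the function name, messages, and flag
are my own assumptions, not part of anyone's scenario). Without a
base case, each simulated layer must spawn another, which is spike's
apparent contradiction; letting the inhabitants in on the truth acts
as the base case that terminates the recursion.

def run_simulation(depth: int, reveal_truth: bool) -> None:
    """One simulated civilization whose researchers try to build an AI."""
    print(f"Layer {depth}: all the elements for a singularity are in place.")

    if reveal_truth:
        # The resolution above: the SI tells the researchers the truth,
        # so they no longer expect a nested singularity; the regress stops.
        print(f"Layer {depth}: SI reveals the truth; recursion terminates.")
        return

    # The apparent contradiction: a faithful simulation of humanity must
    # include its AI researchers, whose AI must simulate humanity in turn...
    run_simulation(depth + 1, reveal_truth)

run_simulation(0, reveal_truth=True)    # halts after one layer
# run_simulation(0, reveal_truth=False) # recurses until RecursionError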

- samantha


