From: spike66 (spike66@attbi.com)
Date: Tue Apr 23 2002 - 23:39:23 MDT
At extro5 a bunch of us were discussing various models for
the singularity, e.g. soft takeoff, hard takeoff, etc. The hardest
takeoff I know about is the scenario I think favored by
Eliezer et al, in which, in the last few minutes before the singularity,
the emergent AI writes all the necessary software itself
without any guidance from humans. This resultant AI is so
advanced that it is able to upload all sentient lifeforms into
a simulation so seamlessly that the meat-based intelligences
do not even realize anything happened. (Hard takeoff fans,
did I get this about right?)
Question please: if the emergent AI simulates humanity at
the moment of the singularity, then there are a bunch of
simulated beings who are working to invent AI and who
would have a strong suspicion that all the elements are in
place for a singularity to occur. Right? The simulated
beings would become puzzled if all the elements for a
runaway AI were in place but the singularity was not
happening. Alternatively, if a group of AI researchers
created an AI, and that AI then uploaded and simulated
humanity, the simulated beings would still be at
work trying to create an AI, which the meta-AI would
then need to simulate. Seems like the sim would need
to create multiple levels of simulations of itself within itself,
an apparent contradiction. Or is it?
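To make the regress concrete, here is a minimal sketch in Python,
purely illustrative; the function name and the crude resource model
are my own assumptions, not part of anyone's takeoff scenario. A
faithful simulation that must in turn host the next simulation has
no base case, so the nesting can only bottom out when fidelity or
resources give out.

    def simulate_world(depth, resources):
        # A faithful sim contains its own AI researchers, who succeed
        # in building an AI, which must then simulate the world again:
        # there is no natural base case to the recursion.
        if resources <= 0:
            raise RuntimeError("out of resources at depth %d" % depth)
        # Assume (arbitrarily) that each nested level gets half of its
        # parent's resources; any fixed overhead gives the same result.
        return simulate_world(depth + 1, resources // 2)

    # simulate_world(0, 10**30) bottoms out after roughly 100 levels:
    # the regress is cut off by resource limits, not by logic.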
Another scenario: the emergent AI realizes that it cannot
create a new level of AI simulation every few minutes, so
it creates a simulated world in which the runaway AI mysteriously
fails to emerge, even tho the AI the simulated researchers build
appears to have greater-than-human intelligence. The AI researchers
would be puzzled
at the apparent failure of AI to emerge, even tho all the
elements would appear to be in place. Perhaps they would
then reason that the singularity must have just happened,
and would begin to search for evidence thereof.

spike