how shall the singularity take off?

From: Spike Jones (spike66@attglobal.net)
Date: Thu Dec 27 2001 - 19:40:48 MST


A comment made by Eliezer has been rattling around in my
head for over a week. The phrase was something like ~
"...those in denial of the hard-takeoff model of the singularity..."

While I see that the hard-takeoff Singularity is perhaps the best-known
model, let us look at possible alternatives, and even consider deriving
closed-form equations, where possible, to describe singularity models.
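
To make that concrete, here is one closed form that is often invoked
for the hard-takeoff picture. This is only a sketch, with an assumed
rate constant k and starting knowledge level x_0, not a claim about the
real dynamics. If the growth rate of knowledge scales with the square
of the knowledge already in hand,

    dx/dt = k x^2,  which solves to  x(t) = x_0 / (1 - k x_0 t),

then x(t) diverges at the finite time t* = 1/(k x_0): a literal
mathematical singularity, i.e. a hard takeoff. Plain exponential
growth, dx/dt = k x, never diverges in finite time, which seems like
one natural way to cash out a soft takeoff.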

If we did that, I can imagine a term in the growth equation of which
we have no knowledge, since that term has always been effectively zero
in the regime where the growth rate of knowledge is on the order of
what we have seen historically and see today.
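
As a sketch of what such a term might look like (again reusing the
hyperbolic form above, purely for illustration), the equation might
really be

    dx/dt = k x^2 + g(x),  with g(x) ~ 0 for all x up to roughly today's level,

so that no amount of curve fitting to historical data tells us
anything about g. Whether g accelerates, damps, or does something
stranger only shows up once x leaves the regime we have ever observed.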

For instance, hard-takeoff fans: perhaps there is an unknown factor
whereby friendly AI begins to develop its own next generation, but a
sub-AI, not necessarily "unfriendly" but merely misguided, attempts to
halt or reverse the growth of AI. Could there be a machine-level
equivalent of Luddites and reactionary religious notions? Is such a
thing even imaginable? I can imagine it.

We now have clear examples of human-level intelligences (humans) who
openly wish to turn back the advance of knowledge and destroy that
which a particularly successful subgroup of humans has created. How
can we be certain that human-equivalent (or greater-than-human) AI
will not somehow get the same idea? Furthermore, how can we be certain
that such an anti-singularity AI would not have far more potential to
slow or reverse the singularity than current human-level intelligences
have? Would not the net effect be a soft-takeoff singularity, or even
a series of failed Spikelets before The Spike?
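
For what it is worth, this scenario is easy to caricature numerically.
Below is a toy sketch in Python, with every constant invented purely
for illustration: hyperbolic growth in knowledge, plus a hypothetical
anti-singularity agent holding a finite budget of interventions that
knock the knowledge level back whenever growth gets too fast. Running
it shows exactly the pattern above: a series of stalls (failed
Spikelets) until the suppressor's budget runs out, and then the blowup
(The Spike).

# A toy numerical sketch, not a model of anything real: hyperbolic
# "hard takeoff" growth in knowledge x, opposed by a hypothetical
# anti-singularity agent with a finite budget of interventions, each
# of which knocks x back by a fixed factor whenever the growth rate
# crosses a tolerance threshold.  Every constant below (k, threshold,
# setback, budget) is invented purely for illustration.

def simulate(steps=5000, dt=0.001, k=1.0, x0=1.0,
             threshold=50.0, setback=0.5, budget=5):
    """Return (t, x) samples of knowledge x(t) under growth plus suppression."""
    x, interventions, history = x0, 0, []
    for i in range(steps):
        growth = k * x * x                      # hard-takeoff style growth term
        if growth > threshold and interventions < budget:
            x *= setback                        # the anti-singularity AI pushes back
            interventions += 1
        else:
            x += growth * dt                    # otherwise knowledge compounds
        history.append((i * dt, x))
        if x > 1e6:                             # past the point of no return: The Spike
            break
    return history

if __name__ == "__main__":
    traj = simulate()
    for t, x in traj[::100]:                    # coarse sample: repeated stalls, then blowup
        print(f"t={t:6.3f}  x={x:12.2f}")
    print(f"takeoff at roughly t={traj[-1][0]:.3f}")

spike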


