From: J. R. Molloy (jr@shasta.com)
Date: Thu Dec 27 2001 - 23:15:28 MST
From: "Spike Jones" <spike66@attglobal.net>
> While I see that the hard takeoff Singularity is perhaps the most
> well-known model, let us look at possible alternatives and even
> consider deriving closed-form equations, if at all possible,
> to describe singularity models.
As Kim Cosmos has eloquently observed, "When you can buy intelligence,
alliances are more important than ability." As you know, there are some very
rich nerds in your neighborhood, Spike, and they will have the first chance to
decide how the AI phase transition shall proceed. If they decide to go with
Bill Joy's relinquishment scenario, then expect more movement toward a police
state and martial law (or perhaps universal robotic law).
> If we did that, I can imagine a term in the growth equation
> which we have no knowledge of, since those terms have
> always been zero in the regime where the growth rates of
> knowledge are on the order we have seen historically and
> today.
Have you noticed the growth rate of litigation in the US?
It may outdistance technological innovation, smothering it in the process.
Then again, the law may simply appropriate AI and make the phase transition a
matter of jurisdiction. How about lawyer-bots running everything?
No, wait! That may have happened already.
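To put a little flesh on the equation talk, here is a toy sketch (my own
back-of-the-envelope illustration, not anybody's established model). Knowledge
grows hyperbolically, and Spike's mystery term shows up as a drag that is
effectively zero at historical values of K but gets a vote later on. Every
parameter name below (r, p, d, q, K0) is invented purely for the example:

# Toy model: dK/dt = r*K**p - d*K**q, with p > 1 (superexponential growth)
# and d*K**q a hypothetical suppressive term (litigation, luddite sub-AIs,
# relinquishment, take your pick) that is negligible while K is small.

def simulate(r=0.05, p=1.5, d=0.0, q=2.0, K0=1.0, dt=0.01, t_max=200.0, cap=1e9):
    """Euler-integrate the toy growth equation and report the outcome."""
    K, t = K0, 0.0
    while t < t_max:
        K += dt * (r * K**p - d * K**q)
        t += dt
        if K >= cap:
            return "hard takeoff: K passed %.0e at t ~ %.1f" % (cap, t)
    return "no takeoff by t = %.0f (K = %.3g)" % (t_max, K)

if __name__ == "__main__":
    print(simulate(d=0.0))     # no unknown term: finite-time blow-up
    print(simulate(d=0.001))   # tiny drag, invisible early, decisive later

With d = 0 the thing blows up in finite time (the hard takeoff); give the drag
term any real weight and growth saturates instead, here around K = 2500, where
the two terms balance. Notice there is no middle ground in this toy, which is
roughly why I say below that the net effect would be hard takeoff or no takeoff.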
> For instance, hard takeoff fans: perhaps there is an unknown factor
> where friendly AI begins to develop its own next generation,
> but a sub-AI, not necessarily "unfriendly" but just misguided,
> attempts to halt or reverse the growth of AI. Could there be
> a machine level equivalent of luddites and reactionary religious
> notions? Is such a thing even imaginable? I can imagine it.
I think the idea of "friendly" AI is perhaps the greatest impediment to
actualizing AI phase transition. Machine intelligence outperforms human
intelligence simply by transcending irrelevant notions and focusing on
objective facts. As AI systems become more and more complex and adaptive, I
think they surpass our ability to imagine the tremendous solutions that they
can create, and the marvelous answers they can provide. To imagine that
machines could evolve into luddite-religious-reactionaries, it is first
necessary to imagine that machines could be "friendly" or malevolent. If we
can avoid making machines "friendly" then we can successfully avoid making
them luddite-religious-reactionaries.
> We now have clear examples of human level intelligences (humans)
> who openly wish to turn back the advance of knowledge, destroy
> that which a particularly successful subgroup of humans has created.
> How can we be certain that human equivalent (or greater-
> than-human equivalent) AI will not somehow get the same
> idea?
It is the desire to be certain that motivates much of the wish to turn back
the advance of knowledge, because the more we know, the more we become
uncertain. So, we can't be certain what human-competitive AI will create.
Similarly, we can't be certain that the next generation of children will not
get infected with the turn-back-the-advance-of-knowledge meme. So, the bottom
line is that we know humans can become luddite-religious-reactionaries, and we
do *not* know that machines can become so meme-infected. Therefore, machines
seem like a better bet.
> Furthermore, how can we be certain that this anti-
> singularity AI would not have much more potential to slow
> or reverse the singularity than the current human-level intelligences
> have?
Or, how do we know that anti-singularity humans would not use human-level AI
to slow or reverse the singularity? We already have evidence, as you've
pointed out, that humans (such as Bill Joy) would like to relinquish the
science that can lead to singularity. But we don't know if
anti-singularitarians could create anti-singularity AIs. Once again, the safe
bet is with the machines, since we don't know if they could become
anti-singularity, but we do know that humans can become so.
> Would not the net effect be a soft takeoff singularity,
> or even a series of failed Spikelets before The Spike?
The net effect would be hard takeoff or no takeoff.
(and don't forget to upload your enteric nervous system)
--- --- --- --- ---
Useless hypotheses, etc.:
consciousness, phlogiston, philosophy, vitalism, mind, free will, qualia,
analog computing, cultural relativism, GAC, Cyc, Eliza, cryonics, individual
uniqueness, ego, human values, scientific relinquishment, malevolent AI,
non-sensory experience, SETI
We move into a better future in proportion as science displaces superstition.