From: spike66 (spike66@ATTBI.com)
Date: Sun Mar 31 2002 - 13:07:19 MST
dwayne wrote:
>spike66 wrote:
>
>>...he likes calling it the Spike! spike
>>
>You're just basking in the implicit flattery :)
>
Ja. {8-]
>>There is also a famous book by that name.
>>
>Famous?
>
Yes.
The Spike is the closest thing we have to a modern version
of Clarke's classic Profiles of the Future. In many ways it
is better written than Profiles, and yes, I know I speak
great blasphemy.

In another post in this thread, Damien used a term I have
been mulling over a lot: saturation. I have been thinking
about modeling the singularity using feedback and control
theory. In modeling any mechanical or electronic system, we
might have theoretical singularities, or spikes, or poles
in the right half plane, whichever is your favorite term.
But in each case, some other phenomenon that is not part of
the model will saturate and stop the progress to infinity.

Consider a simple example of a microphone-amplifier-
speaker system. When the mic gets close enough to the
speaker, a feedback loop causes a spike of sorts. A
mathematical model can predict the time to increase the
volume by a certain ratio, even the pitch of the resulting
tone, but not the final volume, for in general the volume
is limited by saturation in the speaker, the amplifier, or
the microphone. The physical limitations of the sound
system, the saturation level, are not part of the original
feedback model, since they only manifest themselves at the
spike. We don't know what the limits are until we hit them.
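
Here is a rough Python sketch of that microphone howl, just
to show the shape of the curve. The loop gain, loop delay
and clipping level are made-up numbers, and the "amplifier"
is just a min() clamp, so take it as a cartoon rather than
real acoustics:

# Toy feedback loop: mic -> amp -> speaker -> (back into the mic).
# LOOP_GAIN, LOOP_DELAY and CLIP_LEVEL are invented for illustration.
import math

LOOP_GAIN = 1.5     # amplitude multiplier per trip around the loop (>1 means runaway)
LOOP_DELAY = 0.01   # seconds per round trip
CLIP_LEVEL = 10.0   # the amplifier simply cannot put out more than this

def linear_model(a0, t):
    """Pure feedback model: geometric growth, no limit anywhere."""
    return a0 * LOOP_GAIN ** (t / LOOP_DELAY)

def real_system(a0, t):
    """Same loop, but the amplifier clips; this term is not in the model."""
    return min(linear_model(a0, t), CLIP_LEVEL)

a0 = 1e-6  # tiny bit of noise picked up by the mic
for t in (0.0, 0.1, 0.2, 0.3, 0.4, 0.5):
    print("t=%.1fs  model=%12.4g  actual=%8.4g"
          % (t, linear_model(a0, t), real_system(a0, t)))

# The model hands you the doubling time and (roughly) the pitch...
print("doubling time: %.4f s" % (LOOP_DELAY * math.log(2) / math.log(LOOP_GAIN)))
print("howl frequency: roughly %.0f Hz" % (1.0 / LOOP_DELAY))
# ...but the final volume is set by CLIP_LEVEL, which appears nowhere in it.
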
If the classical feedback theory analogy applies to AI (it
might not), then we might expect some kind of previously
unmodeled physical limitation to manifest itself at the
time of the spike, perhaps just milliseconds before the
Yudkowskian version of the singularity.

I proposed such a mechanism a few months ago: the machine
equivalent of a Luddite. Anti-progress AI. If one thinks of
humanity as a form of intelligence, we are seeing a segment
of humanity that is willing to give up life in order to
slow, stop or reverse the pace of human intellectual
development. The Taliban, the Unabomber and Timothy McVeigh
are three examples of human-level intelligences willing to
give up everything just to destroy human progress.

Is it so hard to believe that in the moments before the
singularity, some AI equivalent of these three examples
might grok what was happening and, for some unexplained
reason, try to stop or reverse the process? Could not some
anti-AI
human programmer have foreseen these conditions
and prepared software that would recognize and react to
pre-singularity phenomena? Perhaps it would begin
to violently spew worms everywhere, fire out viruses in
all directions, send net-clogging spam to waiting
zombie machines with instructions to do likewise,
launching all manner of software-based suicide bombs,
the collective effect of which would be to saturate the
system, preventing further progress.

Ironic part: the overwhelming majority of humanity
would cheer as the few of us who realized what had
happened might weep. Even some of those who
grok might cheer, feeling that the inherent dangers of
the singularity outweigh the benefits.

Is there a term already in place that means "unknown
or unmodeled limits that prevent a runaway AI from
going totally open loop"? In keeping with the feedback
control theory analogy, we could call them AI-zeros, since
a zero can cancel a pole in the right half plane. If we
call them roots, as an electronics engineer might, the term
is too similar to root as already used in Unix-speak.
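
For anyone who wants to see the cancellation in symbols,
here is a quick sketch using Python and sympy. The transfer
functions are made up, just the simplest thing I could
write down with one pole in the right half plane:

# A zero sitting right on top of a right-half-plane pole cancels it.
# The transfer functions below are invented purely for illustration.
import sympy as sp

s = sp.symbols('s')
t = sp.symbols('t', positive=True)

# Unstable system: pole at s = +2 (right half plane).
unstable = 1 / ((s - 2) * (s + 1))

# The same system with a zero placed at s = +2.
with_zero = (s - 2) / ((s - 2) * (s + 1))

print(sp.cancel(unstable))   # 1/(s**2 - s - 2): the RHP pole is still there
print(sp.cancel(with_zero))  # 1/(s + 1): only the stable pole survives

# The impulse responses make the difference plain:
print(sp.inverse_laplace_transform(unstable, s, t))   # grows like exp(2*t)
print(sp.inverse_laplace_transform(with_zero, s, t))  # decays like exp(-t)

# Caveat: in a real control system this cancellation is fragile; any
# mismatch between the pole and the zero leaves the instability in place.
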
Is there a term for Luddite AI? Is there anything wrong
with using the term Luddite AI? Is there a term for
pre-singularity saturation?

spike