From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Sep 18 1998 - 14:51:51 MDT
That section of "Staring Into the Singularity" (which Hanson quoted) was
intended as an introduction/illustration of explosive growth and positive
feedback, not a technical argument in favor of it. As an actual scenario,
"Recursive Moore's Law" does not describe a plausible situation, because the
infinite continuation of the eighteen-month doubling time is not justified,
nor is the assumption that AIs have exactly the same abilities and the same
doubling time. Above all, the scenario completely ignores three issues:
nanotechnology, quantum computing, and increases in intelligence rather than
mere speed. This is what I meant by a "pessimistic projection". Actual
analysis of the trajectory suggests that there are several sharp spikes
(nanotechnology, quantum computing, self-optimization curves), more than
sufficient to disrupt the world in which Moore's Law is grounded.
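For concreteness, the arithmetic of that illustration can be sketched in a few
lines of Python - using the eighteen-month doubling time and AIs of exactly
human ability running at chip speed, which is to say, exactly the assumptions
I just called unjustified:

# Toy rendering of the "Recursive Moore's Law" illustration: chip speed
# doubles after every eighteen subjective months of research, and once the
# researchers are AIs running at chip speed, each doubling halves the
# objective time to the next one.  Illustrative numbers only.

doubling_months = 18.0      # assumed Moore's Law doubling time (subjective)
speed = 1.0                 # research speed relative to the human baseline
elapsed = 0.0               # objective months elapsed

for generation in range(1, 11):
    elapsed += doubling_months / speed   # objective time for this doubling
    speed *= 2.0
    print("doubling %2d: speed x%-6d elapsed %6.2f objective months"
          % (generation, speed, elapsed))

# The elapsed time converges toward 36 objective months (18 + 9 + 4.5 + ...),
# which is the "explosive growth" the illustration was meant to convey.

The point of the illustration was only the shape of that curve, not the
particular numbers.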
So what good was the scenario? Like the nanotech/uploading argument, it's a
least-case argument. Would you accept that in a million years, or a billion
years, the world would look to _us_ like it'd gone through a Strong
Singularity? Just in terms of unknowability, not in terms of speed? Well, in
that case, you're saying: "I believe it's possible, but I think it will
happen at a speed I'm comfortable with, one that fits my visualization of the
human progress curve."
The scenario above points out (via the old standby of "It's a billion years of
subjective time!") that once you have AIs that can influence the speed of
progress (or uploaded humans, or neurotech Specialists, or any other
improvement to intelligence or speed) you are no longer dealing with the
_human_ progress curve. More than that, you're dealing with positive
feedback. Once intelligence can be enhanced by technology, the rate of
progress in technology is a function of how far you've already gone. Any
person whose field deals with positive feedback in any shape or form, from
sexual-selection evolutionary biologists to marketers of competing standards,
will tell you that positive feedback vastly speeds things up, and tends to
cause them to swing to "unreasonable" extremes.
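To put that in the simplest possible numerical terms - a toy comparison, not
an argument - take one process whose per-step increment is fixed and one
whose per-step increment is proportional to the level already reached:

k = 0.1           # assumed per-step rate constant
steps = 50

no_feedback = 1.0     # rate of progress independent of the current level
feedback = 1.0        # rate of progress proportional to the current level

for t in range(steps):
    no_feedback += k
    feedback += k * feedback

print("after %d steps: no feedback = %.1f, with feedback = %.1f"
      % (steps, no_feedback, feedback))
# Roughly 6 versus 117 from the same start and the same constant - the
# feedback case is the one that swings to "unreasonable" extremes.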
Some of the things Hanson challenges me to support/define have already been
defined/supported in the sections of "Human AI to transhuman" which I have
posted to this mailing list. As stated there, a "supporting increment" of
progress is one which supports further progress, both in terms of
self-optimization freeing up power for additional optimizing ability, and in
terms of new CPU technologies creating the intelligence to design new CPU
technologies. The rest of the assertions I can defend or define are also
covered in that thread (including "short time", "rapidly", and
"self-sustaining").
But I can't tell you anything about nanotechnology or quantum computing - not
more than the good amateur's grasp we all have. I do not consider myself an
authority on these areas. I concern myself strictly with the achievement of
transhuman or nonhuman intelligence; my major technical background is in
computer programming, with a secondary background in cognitive science. I am
assured of the existence of fast infrastructures by Dr. Drexler, who has a
Ph.D. in nanotechnology and understands all the molecular physics. If in turn
Dr. Drexler should happen to worry about our ability to program such fast
computers, I would assure him that such things seem more probable to me than
nanotechnology. I don't know of anyone who Knows It All well enough to
project the full power/intelligence/speed curve, but the technical experts
seem to think that their particular discipline will not fail in its task.
What I can tell you is this: Given an amount of computing power equivalent to
the human brain, I think that I - with the assistance of a Manhattan Project -
could have an intelligence of transhuman technological and scientific
capabilities running on that power inside of five years. (This is not to say
that _only_ I could do such a thing, simply that I am speaking for myself and
of my own knowledge - my projection is not based on a hopeful assessment of
someone else's abilities.) I can also tell you, from my amateur's grasp of
fast infrastructures, that the Singularity would occur somewhere between one
hour and one year later.
Computing power substantially less than that of the brain would probably
double the time to ten years, but I still think I could do it given a
substantial fraction of the current Internet. In other words, given no
technical improvement whatsoever in any field outside my own, humanity's
resources still suffice for a Singularity. We passed the point of no return
in 1996.
Why do I believe the Singularity will happen? Because I, personally, think I
can do it. Again, not necessarily _only_ me, or even _mostly_ me - but I can
speak for myself and of my own knowledge.
--
sentience@pobox.com    Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I
think I know.