From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Sep 17 1998 - 16:32:25 MDT
Robin Hanson wrote:
>
> Eliezer S. Yudkowsky seems to be the only person here willing to defend
                                                   ^^^^
I object to the implication that I'm fighting a lone cause. The key word is
_here_. There are many others, not surprisingly including AIers (like
Moravec), nanotechnologists (like Drexler), and others who actually deal in
the technology of transcendence, who are also Strong Singularitarians. In
fact, I can't think offhand of a major SI-technologist who's heard of the
Strong Singularity but prefers the Soft.
> "explosive growth," by which I mean sudden very rapid world economic growth.
I don't know or care about "very rapid world economic growth". I think that
specific, concrete things will happen, i.e., the creation of superintelligence
operating at a billion times a human's raw power, because that kind of power
is easily achievable with technologies (quantum computing, nanotech) that can
be envisioned right now, and the only theoretical barrier to them is our
currently inadequate intelligence for solving certain problems (theoretical
physics, molecular engineering, protein folding). I think the trigger will be
transhumans (whether neurotech or silicon) who can either solve the problems
or rapidly design the next generation.
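As a back-of-the-envelope sketch of where a factor of a billion could come
from, consider the arithmetic below (Python). Every figure in it is an
illustrative assumption, not a number given above: HUMAN_BRAIN_OPS and
NANOCOMPUTER_OPS are placeholders whose only job is to make the
order-of-magnitude division explicit.

    # Back-of-the-envelope only; every figure is an illustrative assumption,
    # not a measured or sourced number.
    HUMAN_BRAIN_OPS = 1e17     # assumed raw ops/sec of one human brain
                               # (published estimates span roughly 1e14-1e17)
    NANOCOMPUTER_OPS = 1e26    # assumed ops/sec of a mature nanocomputer
                               # (placeholder, not a citation)

    speedup = NANOCOMPUTER_OPS / HUMAN_BRAIN_OPS
    print(f"raw speedup over one brain: {speedup:.0e}")   # ~1e+09, the "billion times"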
You can call that "explosive growth", if you want. You can describe this
however you like, using any analogies you like, and it won't make a difference
whether the "analogies" are to the rise of Cro-Magnons or the introduction of
the printing press. The actual event will not be driven by analogies. It
will be driven by the concrete facts of cognitive science and
fast-infrastructure technology.
> So I'd like to see what his best argument is for this.
Arguments AGAINST a Singularity generally use the following assumptions:
* 20th-century humans live at the maximum attainable rate of change.
* Some given piece of technology (such as nanotechnology) is impossible,
despite it having already been implemented in nature. (All other technologies
with the same capabilities, whether or not the speaker has foreseen them, are
also impossible by association.)
* Some set of extremely high-level properties is absolutely unalterable, OR
* Some properties as they exist in 20th-century humans (and as improved over
those obtaining in earlier centuries) are perfectly optimal from an SI perspective.
* The "fact" that SIs are Turing-computable means that humans can understand
them by simulating them. (If a trained immortal chimpanzee simulated your
brain by hand, could he understand your thoughts? And why couldn't Aristotle
use this same argument to predict the 20th century?)
* It is impossible for an intelligence to create its successor. (The Argument
From Descartes's Silly Theology.)
(The phrase "20th-century humans" is used when the same would not apply to
"19th-century humans".)
Arguments FOR a Singularity usually require some, but not all, of the
following assumptions:
* 20th-century humans do not already achieve the maximum quality of
intelligence that would be possible with unlimited processing power.
* It is possible to duplicate or exceed the processing power of the human
brain using technology, and then to code an AI.
* It is possible to write software which exceeds human intelligence using less
computing power, at least in the domain of computer design or efficient software.
* There exists at least one fast-infrastructure technology, such as
nanotechnology, which is capable of manufacturing massive computer power in a
short time.
* Intelligence above a certain level can rapidly improve itself or design a
successor, using self-sustaining increments of optimization or fast
infrastructure. (A toy version of this feedback loop is sketched just after
this list.)
* It is possible to use neurosurgery or genetic engineering to enhance human
intelligence, either generally or in a specialty.
* It is possible to integrate a human brain with computer processors.
* It is possible to run a human brain, or a community of brains, using faster
processors. (The nanotech/upload "if all else fails" argument.)
(Many of these assumptions are technical, rather than philosophical, which is
as it should be.)
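As a gloss on the self-improvement assumption above, here is a minimal toy
model of the successor-design feedback loop. The function successor_cascade
and all of its parameters (a 2x capability gain per generation, a ten-year
first design cycle, a 1e9 target) are made up for illustration; the point is
only that if each generation designs the next one faster, the total time to a
huge capability level stays bounded.

    # Toy model only: made-up parameters, no claim about real time scales.
    def successor_cascade(capability=1.0, design_years=10.0, gain=2.0, target=1e9):
        """Each generation is `gain` times more capable and, being smarter,
        finishes designing the next generation in proportionally less time."""
        elapsed = 0.0
        while capability < target:
            elapsed += design_years
            capability *= gain          # the successor is better...
            design_years /= gain        # ...and (assumed) designs faster
        return capability, elapsed

    cap, years = successor_cascade()
    print(f"capability ~{cap:.1e} after ~{years:.1f} years")
    # Total time converges toward 20 years even as capability runs to 1e9.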
The following arguments AGAINST a Strong Singularity have no visible flaws,
and I have no arguments against them.
* Humanity will obliterate itself with nuclear war or nanowar.
* Any SI commits suicide.
* Any SI escapes from the Universe.
* Game theory requires that SIs treat us with kid gloves.
* Unlimited computing power preserves all patterns, including ours.
CONCLUSION: Assuming that none of the above are true, I believe that a Strong
Singularity is the most likely result.
> So first, please make clear which (if any) other intelligence growth
> processes you will accept as relevant analogies. These include the evolution of
> the biosphere, recent world economic growth, human and animal learning, the
> adaptation of corporations to new environments, the growth of cities, the
> domestication of animals, scientific progress, AI research progress, advances
> in computer hardware, or the experience of specific computer learning programs.
None, of course. I wrote "Coding a Transhuman AI"; I don't need to reason
from analogies. There aren't many positive statements I can make thereby, but
I can find definite flaws in other people's simulations. My theory has
advanced enough to disprove, but not to prove - and personally, I find nothing
wrong with that. A scientist doesn't have a Grand Ultimate Theory of
Everything, but he can easily disprove the deranged babblings of crackpots.
It is only the philosophers who attempt the proofs before establishing
disproofs, and come to think of it, it's mainly philosophers who argue by
analogy, starting with the arch-traitor Plato. Among the things that get
disproved by obvious flaws are all of the analogies listed above.
My theory does not disprove Moravecian leakage, SI suicide, nuclear war, or
nanowar (or the SI obliteration of humanity), and therefore I frame no
hypothesis concerning these events. Years ago I had "proofs" that some such
unpleasant things were impossible, and some of my best theories rose from the
methods I used to tear these treasured proofs to shreds. There are many
unpleasant things for which I have no disproof, but the wonderfully pleasant
vision of a slow, humanly-understandable, material-omnipotence,
chicken-in-every-pot UnSingular scenario is both impossible and unimaginative,
and that is that.
When was there ever a point when the large-scale future could be predicted by
analogy with the past? Could Cro-Magnons be predicted by extrapolating from
Neanderthals? (And would the Neanderthals have sagely pointed out that no
matter how intelligent a superneanderthal is, there are still only so many
bananas to pick?) I am not saying that the rise of superintelligence is
analogous to the rise of Cro-Magnons; I am saying that reasoning by analogy is
worthless - the analogies that argue for the Singularity no less than the
analogies that argue against it. The argument from Moore's Law is specious,
unless you're a researcher in computing techniques who can describe exactly
how a given level is achievable, in which case Moore's Law is the default
assumption for time frames.
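A minimal sketch of what "Moore's Law as the default assumption for time
frames" amounts to in practice. The helper years_until and its inputs (a 1e9
ops/sec starting point, an 18-month doubling time, a 1e17 ops/sec target) are
assumptions for illustration, not figures from this post.

    # Illustrative only: starting point, doubling time, and target are assumptions.
    import math

    def years_until(target_ops, current_ops=1e9, doubling_years=1.5):
        """Years for hardware to reach target_ops if it doubles every doubling_years."""
        return math.log2(target_ops / current_ops) * doubling_years

    # e.g. from an assumed 1e9 ops/sec (late-1990s desktop scale) to an assumed
    # 1e17 ops/sec human-brain-equivalent:
    print(f"~{years_until(1e17):.0f} years at an 18-month doubling time")   # ~40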
> If no analogies are relevant, and so this is all theory driven, can the theory
> be stated concisely? If not, where did you get the theory you use? Does
> anyone else use it, and what can be its empirical support, if analogies are
> irrelevant?
The _hypothesis_ can be stated concisely: Transhuman intelligence will move
through a very fast trajectory from on the order of five times (O(5X)) human
intelligence, to nanotechnology, and on to billions of times human
intelligence. The valid reasons for believing this to be the most probable
hypothesis are accessible only through technical knowledge, as is usually the
case.
I can't visualize nanotechnology, but I take Drexler's word that it allows
very high computing power, and I am given to understand that solving a set of
research problems is sufficient to use a standard (current-tech)
DNA-to-protein synthesizer or STM (there's one accessible from the Internet)
to produce the basic replicator. I can visualize cognitive science and AI,
and I say that human intelligence can be improved by adding neurons to search
processes, and that the transcend point of an AI is architectural design; both
would require Manhattan Projects, but both are still within reach.
--         sentience@pobox.com          Eliezer S. Yudkowsky
        http://pobox.com/~sentience/AI_design.temp.html
        http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I
think I know.