From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Sep 07 1998 - 22:24:22 MDT
Extracts from http://pobox.com/~sentience/sing_analysis.html#transition
Quoting Max More: (http://hanson.berkeley.edu/vc.html#more)
"Curiously, the first assumption of an immediate jump from
human-level AI to superhuman intelligence seems not to be a
major hurdle for most people to whom Vinge has presented this
idea. Far more people doubt that human level AI can be
achieved. My own response reverses this: I have no doubt that
human level AI (or computer networked intelligence) will be
achieved at some point. But to move from this immediately to
drastically superintelligent thinkers seems to me doubtful."
This was the best objection raised, since it is a question of human-level
AI and cognitive science, and therefore answerable. While I disagree
with More's thesis on programmatic grounds, there are also technical
arguments in its favor. In fact, it was my attempt to answer this question
that gave birth to "Coding A Transhuman AI". (I tried to write down the
properties of a seed AI that affected the answer, and at around 3:00 AM
realized that it should probably be a separate page...)
The AI is likely to bottleneck at the architectural stage - in fact,
architecture is probably the Transcend Point; once the AI breaks through,
it will go all the way. [...]
Once the seed AI understands its own architecture, it can design new
abilities for itself, dramatically optimize old abilities, spread its
consciousness into the Internet, etc. I therefore expect this to be the
major bottleneck on the road to AI. Understanding program architectures
is the main requirement for rewriting your own program. (Assuming you
have a compiler...) I suppose that the AI could still bottleneck again,
short of human intelligence - having optimized itself but still lacking
the raw computing power to reach it.
But if the AI gets up to human equivalence, as Max More readily grants,
it will possess both human consciousness and The AI Advantage.
Human-equivalent intelligence, in the sense of programming all human
abilities into an AI, isn't human equivalent at all. It is considerably
on the other side of transhuman. [...]
Human high-level consciousness and AI rapid algorithmic performance
combine synergistically.
[...]
While the self-enhancing trajectory of a seed AI is complex, there are
surface properties that can be quantitatively related: Intelligence,
efficiency, and power. The interaction between these three properties
determines the trajectory, and that trajectory can bottleneck - quite
possibly exactly at human intelligence levels.
[...]
Power and efficiency determine intelligence; efficiency could even be
defined as a function showing the levels of intelligence achievable at
each level of power, or the level of power necessary to achieve a given
level of intelligence. Efficiency in turn is related in a non-obvious
but monotonically increasing way to intelligence - more intelligence
makes it possible for the AI to better optimize its own code.
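As a concrete illustration of that loop, here is a minimal sketch in Python.
The functional forms (intelligence proportional to efficiency times power,
efficiency rising with intelligence but saturating) are my own illustrative
assumptions, not anything taken from the page; the point is only that the
same loop can stall at a fixed point or climb much higher, depending on the
raw power fed into it.

    # Toy model of the intelligence/efficiency/power loop described above.
    # The specific functional forms are illustrative assumptions, not claims.

    def intelligence(power, efficiency):
        # Intelligence achievable at a given level of power and efficiency.
        return efficiency * power

    def improved_efficiency(intel, base=0.01, gain=0.5):
        # Efficiency rises monotonically with intelligence, but saturates;
        # the saturation is what makes a bottleneck possible.
        return base + gain * intel / (1.0 + intel)

    def trajectory(power, steps=20):
        eff = 0.01                      # crude initial efficiency
        for step in range(steps):
            intel = intelligence(power, eff)
            eff = improved_efficiency(intel)
            print(f"step {step}: efficiency={eff:.3f}, intelligence={intel:.3f}")

    # With little raw power the loop converges on a low fixed point (a
    # bottleneck); with a hundred times as much, the same loop climbs far
    # higher before the efficiency curve itself becomes the limit.
    trajectory(power=2.0)
    trajectory(power=200.0)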
[...]
Two interesting points immediately arise. First, the Transcend Point
almost certainly requires a basic minimum of power. In fact, the amount
of raw power may exert as much influence on the trajectory as all the
complexities of architecture. While a full-fledged Power might be able
to write a Singularity-capable program that ran on a Mac Plus, it is
improbable that any human or seed AI could do so. The same may apply to
other levels of power, and nobody knows where the cutoffs lie. Tweaking the level of power
might enable a bottleneck to be imposed almost anywhere, except for a few
sharp slopes of non-creative self-optimization. The right level of
limited power might even create an actual transhuman bottleneck, at least
until technology advanced... although the transhuman might be very slow
(Mailman), or a huge increase in power might be required for any further
advancement. Or there might be sharp and absolute limits to
intelligence. (I must say that the last two possibilities strike me as
unlikely; while I have no way of peering into the transhuman
trajectories, I still see no arguments in support of either.)
We now come to Max More's point. It so happens that all humans operate,
by and large, at pretty much the same level of intelligence. While our
level could be coincidental, it could also represent the location of a
universal bottleneck. If one is to guess where AIs will come to a sudden
halt, one could do worse than to guess "the same place as all the other
sentients".
[...]
In short, the brain doesn't self-enhance; it only self-optimizes a prehuman
subsystem. You can't draw conclusions from one system to the other. The
genes give rise to an algorithm that optimizes itself and then programs
the brain according to genetically determined architectures - this
multi-stage series not only isn't self-enhancement, it isn't even
circular.
[...]
The point is - how much raw power does it take to create a seed AI?
(This is the converse of the usual skepticism, where we allow that
Moore's Law gives us all the power we want and question whether anyone
knows what to do with it.) It could take a hundred times the power of
the human brain, just to create a crude and almost unconscious version!
We don't know how the neural-level programmer works, and we don't know
the genetically programmed architecture, so our crude and awkward
imitation might consume 10% of the entire worldwide Internet twenty years
from now, plus a penalty for badly distributed programming, and still run
at glacial speeds. The flip side of that inefficiency is that once such
a being reaches the Transcend Point, it will "go all the way" easily
enough - it has a hundred times human power at its disposal. Once it
reaches neural-programmer efficiencies, its old intelligence only
occupies 1% of the power available to it - and by the time the newly
available power has been used up, it has probably reached a new level of
efficiency and freed up more power, and also gained the ability to create
nanotechnological rapid infrastructure.
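The arithmetic behind that scenario, spelled out under its own assumptions
(a hundredfold power budget and a crude initial efficiency one hundredth of
the neural programmer's):

    # Numbers from the scenario above, under its stated assumptions.
    # Power is measured in units of one human brain.
    seed_ai_power     = 100.0        # "a hundred times the power of the human brain"
    crude_efficiency  = 1.0 / 100.0  # crude code needs 100x the hardware per unit of mind
    brain_efficiency  = 1.0          # the neural programmer's efficiency, as the benchmark

    crude_intelligence  = seed_ai_power * crude_efficiency   # 1.0 -- roughly human
    mature_intelligence = seed_ai_power * brain_efficiency   # 100.0 once self-optimized

    # Once the AI reaches neural-programmer efficiency, its old, human-level
    # mind occupies only 1% of the power now available to it.
    print(crude_intelligence / mature_intelligence)           # 0.01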
If, on the other hand, human programmers are more efficient than the
neural-level optimizer, then the seed AI might have human-equivalent
ability on a tenth of the power - perhaps running on the 'Net today, or
on a single supercomputer in twenty years. And by "human-equivalent" I
do not mean the way in which I originally interpreted Max More's
statement, "full human consciousness plus The AI Advantage". I mean
"partial human consciousness, which when added to The AI Advantage,
yields human-equivalent ability". Such a seed AI wouldn't have access to
additional power, and it might not reach any higher efficiencies than
that of its creators, so its intelligence might remain constant at the
human level. If the intelligence/efficiency/power relation is exactly
right, the seed AI could remain unflowering and unTranscendent for years,
through two or three additional doublings of power. It will, however,
break through eventually. I think ten years is the upper limit.
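A toy timeline for this low-power case, with the caveat that the doubling
time and the breakthrough threshold are illustrative assumptions of mine,
not figures from the page:

    # Low-power scenario: efficiency is stuck at the creators' level, so
    # intelligence sits at human level until outside hardware growth pushes
    # the available power past the Transcend threshold.
    DOUBLING_TIME_YEARS = 2.0    # assumed hardware doubling time
    TRANSCEND_THRESHOLD = 8.0    # assumed power (in starting units) needed to break through

    power = 1.0                  # power at which the AI is merely human-equivalent
    years = 0.0
    while power < TRANSCEND_THRESHOLD:
        years += DOUBLING_TIME_YEARS
        power *= 2.0

    print(f"breakthrough after ~{years:.0f} years and {power:.0f}x starting power")
    # With these numbers: three doublings, about six years -- inside the
    # ten-year upper limit suggested above.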
To summarize: First, if a seed AI reaches human equivalence, it has
programming ability considerably beyond what's required to enhance
human-level abilities. Second, there are sharp differences between seed
AI power and human power, seed AI efficiency and neural-programmer
efficiency, and different efficiency/power/intelligence curves for the
species. My estimated result is a bottleneck followed by a sharp snap
upwards, rather than a steady increase; and that the "snap" will occur
short of humanity and pass it rapidly before halting; and that when the
snap halts, it will be at an intelligence level sufficient for rapid
infrastructure.
--
sentience@pobox.com          Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I think I know.