From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Sep 08 1998 - 13:22:05 MDT
Robin Hanson wrote:
>
> Eliezer S. Yudkowsky writes:
> >You can't draw conclusions from one system to the other. The
> >genes give rise to an algorithm that optimizes itself and then programs
> >the brain according to genetically determined architectures ...
>
> But where *do* you draw your conclusions from, if not by analogy with
> other intelligence growth processes? Saying that "superintelligence is
> nothing like anything we've ever known, so my superfast growth estimates
> are as well founded as any other" would be a very weak argument. Do you
> have any stronger argument?
Basically, "I designed the thing and this is how I think it will work and this
is why." There aren't any self-enhancing intelligences in Nature, and the
behavior produced by self-enhancement is qualitatively distinct. In short,
this is not a time for analogical reasoning. If I had to break the argument
down into its strongest parts, it would go like this:
Statement: A seed AI trajectory consists of a series of sharp snaps and
bottlenecks. Slow improvements only occur due to human intervention or added power.
Reason: Either each added increment of intelligence yields an increment of
efficiency that can sustain the reaction, or it doesn't. While the seed AI
might be slow enough that we could watch the "snap" in slow motion, in the end
either the going is easy or the going is very hard - the function is inherently
unbalanced, and this is also the behavior exhibited by all current AIs.
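To make the dichotomy concrete, here is a toy recurrence - my own illustrative
numbers, nothing derived from the actual design. If each unit of intelligence
buys back more than a unit's worth of further improvement, the reaction
sustains itself; if it buys back less, the reaction fizzles and sits there
until outside help arrives. There is no interesting middle ground.

    # Toy model of the "either it sustains or it doesn't" dichotomy.
    # The return factor r is a made-up illustrative parameter, not an estimate.
    def trajectory(r, cycles=20, intelligence=1.0):
        path = []
        for _ in range(cycles):
            intelligence *= r   # each increment of ability buys the next increment
            path.append(intelligence)
        return path

    print(trajectory(1.2)[-1])   # r > 1: the reaction sustains itself - a "snap"
    print(trajectory(0.9)[-1])   # r < 1: it damps out - a "bottleneck"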
After that, it's just guessing where the first bottleneck will be. That it lies
after optimization, after the level of the neural-level programmer, and before
human intelligence has been established in various parts of the argument. Then I
make a guess, like Lenat with EURISKO, that the key is adding new domains; and
then I moreover guess the key ability for _that_ is "architecture".
Certain? No, of course not. If I may be permitted to toot my own horn, the
argument is far too rational to be certain. But I still think it's a step
above "does not"/"does too" debate of superintelligent trajectories. At least
it's based on a mental model you can sink your teeth into.
> We humans have been improving ourselves in a great many ways for a long time.
> By a six year old's definition of intelligence ("she's so smart; look at all
> the things she knows and can do"), we are vastly more intelligent than our
> ancestors of a hundred thousand years ago. Much of that intelligence is
> embodied in our social organization, but even when people try their hardest
> to measure individual intelligence, divorced from social supports, they
> still find that such intelligence has been increasing dramatically with time.
The structure is still a lot different. What you have is humans being
optimized by evolution: "A" being optimized by "B". That is a lot different
from a seed AI, which is "C" being optimized by "C". Even if humans take
control of genetics, "A" being optimized by "B" being optimized by "A" is
still vastly different from "C" being optimized by "C", in terms of trajectory.
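A crude way to see the difference, with toy numbers that stand for nothing in
particular: when "B" optimizes "A", the rate of improvement is set by B and
stays roughly constant, so the gains pile up linearly; when "C" optimizes "C",
every gain feeds back into the thing doing the optimizing, so the rate itself
climbs.

    # "A optimized by B" vs. "C optimized by C" - toy numbers, nothing more.
    def external_optimization(steps=20, optimizer_power=1.0):
        a = 1.0
        for _ in range(steps):
            a += optimizer_power      # B's power is fixed, so A gains linearly
        return a

    def self_optimization(steps=20):
        c = 1.0
        for _ in range(steps):
            c += c                    # each gain raises C's own optimizing power
        return c

    print(external_optimization())    # 21.0 - linear accumulation
    print(self_optimization())        # 1048576.0 - the gains compound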
> This experience with intelligence growth seems highly relevant to me.
> First, we see that the effect of smarter creatures being better able to
> implement any one improvement is counteracted by the fact that one tries the
> easy big win improvements first. Second, we see that growth is social; it
> is the whole world economy that is improving together, not any one creature
> improving itself. Third, we see that easy big win improvements are very rare;
> growth is mainly due to the accumulation of many small improvements.
> (Similar lessons come from our experience trying to write AI programs.)
With respect to human genetic evolution, I agree fully, but only for the past
50,000 years. On any larger scale, punctuated equilibrium seems to be the
rule; slow stability for eons, then a sudden leap. The rise of the
Cro-Magnons was a very sharp event. A fundamental breakthrough leads to a
series of big wins; after _that_, it's slow optimization until the next big win
opens up a new vista. A series of breakthroughs and bottlenecks.
The history of AI seems to me to consist of a few big wins in a vast wasteland
of useless failures. HEARSAY II, Marr's 2.5D vision, neural nets, Copycat,
EURISKO. Sometimes you have a slow improvement in a particular field when the
principles are right but there just isn't enough computing power - voice
recognition, for example. Otherwise: Breakthroughs and bottlenecks.
> Now it is true that AIs should be able to more easily modify certain
> aspects of their cognitive architectures. But it is also true that human
> economic growth is partly due to slowly accumulating more ways to more
> easily modify aspects of our society and ourselves. The big question is:
> why should we believe that an isolated "seed AI" will find a very long stream
> of easy big win improvements in its cognitive architecture, when this seems
> contrary to our experience with similar intelligence growth processes?
It isn't contrary to our experience. Just the opposite. Oh, it might not
find the Motherlode Breakthrough right away; I fully expect a long period
while we add more computers and fiddle continuously with the code. But once
it does - self-enhancing rise of the Cro-Magnons.
If it's breakthroughs and bottlenecks in cases of *no* positive feedback; and
even smooth functions turn sharp when positive feedback is added; and every
technological aid to intelligence (such as writing or the printing press)
produces sharp decreases in time-scale - well, what godforsaken reason is
there to suppose that the trajectory will be slow and smooth? It goes against
everything I know about complex systems.
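You can see the point about positive feedback in the simplest possible growth
laws - toy math, not a model of the AI. Keep the growth rate constant and the
curve stays tame; make the rate proportional to what has already been gained
and the curve turns exponential; make the returns superlinear and the thing
runs away outright.

    # Three toy growth laws, integrated with crude Euler steps.
    # k, dt, and the step count are arbitrary; only the shapes matter.
    def grow(rate_fn, steps=30, dt=0.1, x=1.0):
        for _ in range(steps):
            x += rate_fn(x) * dt
        return x

    k = 0.5
    print(grow(lambda x: k))          # no feedback: linear, ends around 2.5
    print(grow(lambda x: k * x))      # positive feedback: exponential, around 4.3
    print(grow(lambda x: k * x * x))  # superlinear feedback: astronomically large by step 30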
--
sentience@pobox.com         Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I think I know.