From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Sep 27 1998 - 13:46:51 MDT
Robin Hanson wrote:
>
> Let's say that at each moment the AI has a list of optimization ideas to try,
> and some indication of which ideas are more promising. Let's assume per your
> claim that each optimization will either fail completely, or result in a 10%
> improvement. There are then two tasks: evaluate and implement each idea.
>
> As the AI progresses there are contrary forces working for and against
> accelerated growth. As the AI gets more optimized, it is able to implement
> any one optimization idea in a short time. It may also be able to evaluate
> each idea in a shorter time. But working against this is the fact that the
> AI will wisely first work on the easiest most likely to succeed ideas. So
> as time goes on the AI has to evaluate more and more ideas before it comes
> to one that works, and it takes more and more work to implement each idea.
Now _that's_ the kind of argument I wanted to hear! Thanks, Hanson.
[But would you _please_ stop making me an additional recipient? This message
is very late because it accidentally went to Hanson only!]
The entire cycle described above is what I would consider a single "increment"
(O' or I' or P'), if an increment is defined as the total amount achievable
for any given values of (P, O, I). You are dealing with fragmentary
increments, assuming that the AI's time to complete any one task is what
unfolds on our time scale. But I'm thinking in terms of a
series of AIs created by human programmers, and that the entire potential of a
given model of AI will be achieved in a run over the course of a few hours at
the most, or will bog down in a run that would take centuries to complete.
(As I say in another email, because of the relative speeds involved, and
because of the mix of conscious thought and algorithm, exactly human speed is
unlikely.) In either case, the programmer (or more likely the Manhattan
Project) sighs, sits down, tries to fiddle with O or I and add abilities, and
tries running the AI again.
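To make that cycle concrete, here is a minimal sketch in Python of the model Hanson describes. Everything quantitative in it is my own stand-in, not anything from his message: the 200-idea list, the cost and success-probability distributions, and the easiest-and-most-promising-first ordering are made up; only the 10% improvement per success comes from the post above. Whether the printed gaps between successes grow or shrink depends entirely on the distributions assumed, which is exactly the point in dispute.

# A toy run of one full "increment": the AI works down a list of optimization
# ideas, easiest and most promising first, gaining a 10% speedup per success.
import random

random.seed(0)

speed = 1.0                      # the AI's current working speed (arbitrary units)
ideas = sorted(
    # (work required, probability of success) -- hypothetical numbers
    [(random.uniform(1, 100), random.uniform(0.05, 0.9)) for _ in range(200)],
    key=lambda idea: idea[0] / idea[1],   # easiest / most promising first
)

clock = 0.0
for work, p_success in ideas:
    clock += work / speed        # evaluation and implementation speed up as the AI does
    if random.random() < p_success:
        speed *= 1.10            # a successful optimization: 10% improvement
        print(f"t={clock:8.1f}  speed={speed:6.2f}")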
What matters is not the AI's prioritization of its tasks, but the ability of
some of those tasks to increase intelligence rather than merely optimize.
Pick the AI's task-board for a single level of intelligence, and of course
things go easily at first and get harder later on. But what matters is not
the level it starts at, but the succession of levels, and when you "zoom out"
to that perspective, the key steps are likely to be changes to the fundamental
architecture, not optimization. If the AI is useful, anyway. Running an
optimizing compiler on an optimizing compiler doesn't go to infinity;
optimization by itself is useless, except insofar as it frees up power for more intelligence.
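The compiler point can be put in numbers. A toy illustration, with figures chosen purely for the sake of the example (a 10% gain on the first pass, each later pass finding half as much to improve): the cumulative speedup converges to a finite bound instead of compounding forever.

# Why running an optimizer on itself doesn't go to infinity, under the assumed
# gain schedule: successive improvements shrink geometrically, so the product
# of the speedups converges (here to roughly 1.21x) and optimization plateaus.
speedup = 1.0
gain = 0.10                      # first pass: 10% improvement (assumed)
for i in range(1, 21):
    speedup *= 1.0 + gain
    gain /= 2                    # each later pass finds half as much to improve
    print(f"pass {i:2d}: cumulative speedup = {speedup:.4f}x")

Only a change to the underlying architecture moves the system onto a new curve; that is the sense in which optimization merely frees up power.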
> So why should AI abilities get better faster than the problems get harder?
> This is not our experience in other analogous areas of learning, such as
> improving computer hardware.
A human society proceeds at a fairly constant pace because it is made up of
such a variety of intelligences. There are a lot of uninspired nobodies and
many competent researchers and a few creative minds and a supergenius once
every few generations. All these combine to lend a fair steadiness to the
rate of progress; no breakthrough adds up and up, because the geniuses have only
a limited number of initial insights and eventually pass their prime; no
bottleneck persists forever, because a genius always comes along sooner or
later. There are all manner of negative feedbacks and inertias, ranging from
lack of popular support to conservative oldsters. (I am thinking particularly
of the history of physics.)
But an AI is a single person, and its level of intelligence is unitary. (Oh,
this isn't necessarily true and I know it, but it is more true of an AI than
of a human society.) Imagine if all the physicists in the world had exactly
the same amount of brainpower, if the whole were linked. They would not pass
to Newtonian gravitation until all of them were Newtons, but then they would
remain Newtons during the next few centuries; and we must moreover imagine
that they would be totally unhampered by conservatism, or politics, or
attachment to a particular idea, because we, their makers, would not have built
those in. And then the society of Newtons would pass onward at whatever rate
they did, and would not proceed to 20th-century physics until Einstein came
along, but then they would all be Einsteins forever after, Einstein in his prime.
I think that the result of this on a small scale would be exactly what Hanson
has described. With each new level of intelligence, there would be an initial
spurt of ideas (of whatever magnitude), then a leveling off. (Just like AM
and EURISKO.) There would be no steady progress, as we have now, because
there would be no diversity.
But returning to seed AIs, if the spurt of progress could reach a new level of
intelligence, one becomes concerned with the whole ladder and not with any
individual step. Each rung on the ladder is clearly labeled with how far up the
climber can reach from it: the length of ver arms. Now either the climber will go
from bottom to top, or ve will reach an impasse. If each new rung lengthens
the arms over a period of time, quickly at first and then with exponentially
decreasing increments (does Hanson prefer another function?), it
still remains true that if the climber cannot reach the next rung on the first
try, ve is not likely to reach it on the second; if ve cannot reach it on the
fourth, ve is virtually certain not to reach it on the sixteenth, and the
climber shall hang there until a human comes along to give ver a boost.
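To put the ladder in numbers, with figures chosen only for illustration: suppose each try at a given rung gains half the reach of the try before it. The total reach then converges to first_gain / (1 - ratio), so a climber whose first try falls well short of the gap never closes it, however many tries ve makes.

# The ladder metaphor under an assumed geometric arm-lengthening schedule.
rung_gap = 1.0                   # distance to the next rung (arbitrary units)
first_gain = 0.4                 # reach gained on the first try at this rung
ratio = 0.5                      # each later try gains this fraction of the last

reach, gain = 0.0, first_gain
for attempt in range(1, 17):
    reach += gain
    gain *= ratio
    print(f"attempt {attempt:2d}: reach = {reach:.4f}  (limit = {first_gain / (1 - ratio):.4f})")
    if reach >= rung_gap:
        print("next rung reached")
        break
else:
    # sixteen tries later the climber is still short of the rung
    print("stalled: waiting for a human to give the climber a boost")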
--
sentience@pobox.com    Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I think I know.