From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Sep 25 1998 - 15:39:49 MDT
Robin Hanson wrote:
>
> Eliezer S. Yudkowsky writes:
> >> Mere mention of the word "feedback" is not sufficient to argue for a sudden
> >> and sustained acceleration in growth rates, which is what you seem to claim.
> >
> >I didn't just "mention" it; I talked about the behavior of the sum of the
> >series of I'1 = C(P, O, I), I'2 = C(P, O, I + I'1), I'3 = C(P, O, I + I'1 +
> >I'2), etc. I don't see any realistic way to get steady progress from this
> >model. Flat, yes, jumps, yes, but not a constant derivative.
>
> You just keep repeating your claim about the behavior of the sum, without
> elaborating why one thing is more "realistic" than another. If C is concave
> in its third argument, you get subexponential growth. If C is convex instead,
> you get superexponential growth (which may still be very slow for a long time).
> And lots of functions are neither concave nor convex. Why is a strongly
> convex C more realistic?
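For concreteness, here is a minimal sketch of the series under discussion, holding P and O fixed so that C reduces to a function of its third argument (the accumulated intelligence). The two shapes of C below are illustrative assumptions, not taken from either poster's model; they just show the subexponential-versus-superexponential behavior being argued about.

# Minimal sketch of the series I'1 = C(P, O, I), I'2 = C(P, O, I + I'1), ...
# with P and O held fixed.  Both forms of C are invented for illustration.

def total_intelligence(C, I=1.0, steps=30):
    # I'_{n+1} = C(I + I'_1 + ... + I'_n); return the running total each step.
    total, history = I, []
    for _ in range(steps):
        total += C(total)
        history.append(total)
    return history

concave_C = lambda x: 0.5 * x ** 0.5   # diminishing returns: subexponential growth
convex_C  = lambda x: 0.05 * x ** 2    # increasing returns: superexponential growth

print("concave:", [round(t, 1) for t in total_intelligence(concave_C)[::5]])
print("convex: ", [round(t, 1) for t in total_intelligence(convex_C)[::5]])

With the concave form the totals crawl upward at an ever-flattening rate; with the convex form they stay small for many iterations and then blow up.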
You can't apply the same optimization trick over and over again; that's like
the old joke about compressing Usenet down to one byte with lossless
compression. If optimization yields a small jump, then the next increment of
optimization is likely to be zero, since much the same method is being used.
If optimization yields a big jump, one that translates into a substantial
amount of power freed up for intelligence, the AI is likely to redesign itself
in a fairly major way - from 1.1 to 2.0, or at least from 1.0 to 1.1: a major
repartitioning of the computational modules, and whatnot, which in turn is
likely to lead to another large jump in intelligence and optimization.
Now either these large steps keep repeating to superintelligence, or at some
point the AI can't redesign or optimize itself. I don't believe in slow,
steady improvement. Debugging, yes. But if you're talking about the slow
reworking of code, line by line, you're really talking about a large jump in
slow motion, slow only because the AI itself is slow - if the AI can rework a
line of code well enough to get an improvement without needing to add more
intelligence, it's all part of the same "increment", the same I' or O'. If the
partial reworking adds even more intelligence, then the equation runs even faster.
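A toy rendering of that flat-or-jumps claim, with every threshold and multiplier invented for illustration rather than taken from the exchange: each pass extracts part of a pool of optimizations available to the current design, so gains die out and the curve flattens; but a gain past some threshold triggers a redesign that opens up a much larger pool, and the jumps then repeat all the way up.

# Toy model of "flat or jumps".  All numbers are illustrative assumptions.

def trajectory(intelligence=1.0, pool=1.0, steps=30,
               redesign_threshold=0.5, pool_multiplier=4.0):
    path = [intelligence]
    for _ in range(steps):
        gain = 0.6 * pool           # each pass extracts part of the current pool
        pool -= gain                # the same trick yields less and less
        intelligence += gain
        if gain > redesign_threshold * intelligence:
            # A big enough jump frees power for a major redesign, which opens
            # up a new, larger pool of optimizations at the new scale.
            pool = pool_multiplier * intelligence
        path.append(intelligence)
    return path

print("small pool:", [round(x, 2) for x in trajectory(pool=1.0)[::6]])
print("large pool:", [round(x, 1) for x in trajectory(pool=2.5)[::6]])

With a small initial pool the trajectory flatlines almost immediately; with a larger one, every jump triggers the next redesign and the totals climb without ever passing through a steady intermediate slope.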
Final remark: Given the relative computational requirements of consciousness
and algorithmic thinking, given the Principle of Mediocrity, and given the
linear speeds and processing power of the hardware relative to the human
brain, I would find it a remarkable coincidence if a major jump were slowed
down exactly enough to look like slow and steady improvement on the human
timescale, rather than flat or vertical. It might happen, because the human
programmers could be unable to work on things happening on other timescales,
but it wouldn't happen by coincidence.
--
sentience@pobox.com          Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I think I know.