From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Oct 01 1998 - 19:10:15 MDT
Sigh. Another Hanson-only, apparently.
--
sentience@pobox.com          Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.
attached mail follows:
Robin Hanson wrote:
>
> Eliezer S. Yudkowsky writes:
> >>>Well, my other reason for expecting a breakthrough/bottleneck architecture,
> >>>even if there are no big wins, is that there's positive feedback involved,
> >>...
> >>Let me repeat my call for you to clarify what appears to be a muddled argument.
> >...
> >Sigh. Okay, one more time: The total trajectory is determined by the
> >relation between power (raw teraflops), optimization (the speed and size of
> >code) and intelligence (the ability to do interesting things with code or
> >invent fast-infrastructure technologies).
> >Given constant power, the trajectory at time T is determined by whether the AI
> >can optimize itself enough to get an intelligence boost which further
> >increases the ability at optimization enough for another intelligence boost.
>
> Can I translate you so far as follows?
> Let P = power, O = optimization, I = intelligence.
> For any X, let X' = time derivative of X.
> The AI can work on improving itself, its success given by functions A,B,C.
> If the AI devoted itself to improving P, it would get P' = A(P,O,I), O'=I'=0.
> If the AI devoted itself to improving O, it would get O' = B(P,O,I), P'=I'=0.
> If the AI devoted itself to improving I, it would get I' = C(P,O,I), P'=O'=0.
> (If it devotes fractions a,b,c of its time to improving P,O,I, it presumably
> gets P' = a*A, O' = b*B, I' = c*C.)
Exactly. When P', O', and I' are all zero - when the AI can't redesign its
chips, optimize existing abilities, or design new ones - the trajectory
bottlenecks. I do have one minor dispute with your terminology, however: I
think you should substitute A' for A. With AI intelligence, I think the
current capabilities measure absolute limits, not how much "improvement" one
can wreak. An AI of low intelligence might be able to design very slow and
stupid chips - A is nonzero - but be unable to improve on human levels. In
other words, the elegance of the equation is marred by the artificial initial
values. Maybe your terminology is better, since you can just set A to zero
until the AI is a competent (or transhuman) researcher in chip technologies,
however long that takes... Anyway, I'll continue to use your terminology.
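Just to make sure we're reading the same equations, here's a toy rendering of
the model in Python. Every functional form and constant in it is invented by
me purely for illustration; the one structural commitment is that the
chip-design term returns zero until I crosses a competence threshold, per my
quibble above.

# Toy rendering of the P/O/I model.  Functional forms and constants
# are invented for illustration only; nothing here is a prediction.

def A(P, O, I, threshold=5.0):
    # Chip-design improvement: zero until the AI is a competent
    # researcher in chip technologies.
    return 0.0 if I < threshold else 0.1 * I

def B(P, O, I):
    # Code-optimization improvement, driven by current intelligence.
    return 0.05 * I

def C(P, O, I):
    # Intelligence improvement from better code on more power.
    return 0.01 * P * O

def step(P, O, I, a, b, c, dt=1.0):
    # Devote fractions a, b, c of the AI's time to improving P, O, I.
    return (P + a * A(P, O, I) * dt,
            O + b * B(P, O, I) * dt,
            I + c * C(P, O, I) * dt)

P, O, I = 1.0, 1.0, 1.0
for n in range(10):
    P, O, I = step(P, O, I, a=0.0, b=0.5, c=0.5)
    print(n, round(P, 3), round(O, 3), round(I, 3))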
> >Presumably the sum of this series converges to a finite amount.
>
> Sum? Of what over what? Do you mean that for the AIs choice of a,b,c, that
> P,O, and I converge to some limit as time goes to infinity?
Actually, I was speaking of O and I with constant P, since I think P' is going
to be zero - or at most the default human rate of doubling every eighteen
months - until the AI winds up on the post-Singularity side of the trajectory;
I don't think a pre-Singularity AI can handle the chip research. Given
sufficient I, I think P goes to infinity, or as close to it as makes no
difference - a.k.a. Singularity.
But for constant P and constant O, as a simplification, I define the sum as
follows: Given I'1 = C(P, O, I), and I'2 = C(P, O, I + I'1), and I'3 = C(P,
O, I + I'1 + I'2), then the total improvement in intelligence is the sum of
the series. (Since the AI does operate in finite steps, I do not say
integral.) Let us assume the basic strategy of increasing I until a
bottleneck occurs, then increasing O until a bottleneck occurs, and
alternating. If both reach bottlenecks simultaneously, and the AI is too dumb
to have a nonzero A, then a triple bottleneck has occurred. As previously
stated, I think that nonzero A lies on the other side of a Singularity, so I
usually assume zero A and constant P. (Is this optimism or pessimism?)
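To spell out the bookkeeping, here's a sketch of that alternate-until-
bottleneck strategy under constant P and zero A. Again, the particular B and C
(with their built-in diminishing returns) and all the numbers are my own
inventions, there only to show the structure:

# Sketch of the alternating strategy under constant P: push I until the
# increments C(...) fall below some epsilon (a bottleneck), then push O,
# and so on.  B and C are invented stand-ins with diminishing returns.

import math

def C(P, O, I):
    # Intelligence gain per step, saturating as I approaches a ceiling
    # set by P and O.
    return max(0.0, 0.1 * (math.sqrt(P * O) - I))

def B(P, O, I):
    # Optimization gain per step, saturating as O approaches a ceiling
    # set by I.
    return max(0.0, 0.1 * (2.0 * I - O))

def run(P=10.0, O=1.0, I=1.0, eps=1e-3, max_steps=10_000):
    improving = "I"
    for _ in range(max_steps):
        gain = C(P, O, I) if improving == "I" else B(P, O, I)
        if gain < eps:
            other = B(P, O, I) if improving == "I" else C(P, O, I)
            if other < eps:
                # Both O and I bottlenecked; with A assumed zero and P
                # constant, this is the triple bottleneck.
                return P, O, I
            improving = "O" if improving == "I" else "I"
            continue
        if improving == "I":
            I += gain
        else:
            O += gain
    return P, O, I

print(run())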
> >If the amount
> >is small, we say the trajectory bottlenecks; if the amount is large, we say a
> >breakthrough has occurred. The key question is whether the intelligence
> >reached is able to build fast-infrastructure nanotechnology and the like, or
> >to exhibit unambiguously better-than-human abilities in all domains.
>
> I thought you had an argument for why "breakthrough" is plausible, rather than
> just listing it as one of many logical possibilities.
I meant "B and C have to bottleneck eventually, but when they do, will A be
very large?" And my answer was: If I, P, and O are large enough to begin
with. And I further went on to say: If the initial conditions are P=10^13
ops, O=human, I=architectural design, I think that the OI bottleneck will be
P=10^13 ops, O=far transhuman, I=?transhuman, with A(P, O, I) = nanotech.
(?transhuman means somewhere between Wili Wachendon and a Power, but I don't
know where.) Of course, any term with "human" in it is loosely defined, like
humans themselves.
But if you mean "Why is a sharp jump upwards plausible at any given point?",
my answer is that, for any reasonable function f() of optimization and
intelligence, solving the differential equation y' = f(y) yields a curve which
is either flat or sharp. Either the increases in I and O are self-sustaining,
yielding further increments, or they peter out.
In technological progress with constant intelligence, you have t' = t, which
gives us the exponential growth we all know and love. If intelligence were a
function of technology (i = t), and given that intelligence sets the rate of
technological growth, I think a more realistic model is t' = e^t, whose
solution is t = -log(C - time); it doesn't just grow, it goes to infinity in
finite time.
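To see the difference numerically, here's a throwaway Euler integration of
both equations (the step size, starting value, and blow-up cutoff are all
arbitrary choices of mine): the first just grows exponentially, the second
runs off toward infinity before time 1.

# Throwaway Euler integration contrasting t' = t (plain exponential
# growth) with t' = e^t (runs off to infinity in finite time).  Step
# size, starting value, and the blow-up cutoff are arbitrary.

import math

def trajectory(f, y0=0.1, dt=1e-4, steps=10_000, blowup=100.0):
    # Return (time, value) samples; stop early if the value blows up.
    samples, y = [], y0
    for n in range(1, steps + 1):
        y += f(y) * dt
        if y > blowup:
            samples.append((round(n * dt, 4), float("inf")))
            break
        if n % (steps // 5) == 0:
            samples.append((round(n * dt, 4), round(y, 3)))
    return samples

print("t' = t   :", trajectory(lambda y: y))
print("t' = e^t :", trajectory(lambda y: math.exp(y)))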
> >... At this point, the key question for me is "How much of _Coding a
> >Transhuman AI_ did you actually read?"
>
> All of it.
I'm impressed. And honored, and glad to know it was readable.
--
sentience@pobox.com          Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.