From: Robin Hanson (hanson@econ.berkeley.edu)
Date: Mon Sep 28 1998 - 11:16:46 MDT
Eliezer S. Yudkowsky and I seem to be going in circles:
>>> >>> > Mere mention of the word "feedback" is not sufficient ...
>>> >>> >I didn't just "mention" it; I talked about the behavior of the
>>> >>> >sum of the series I'1 = C(P, O, I), I'2 = C(P, O, I + I'1), ...
>>> >>> Why is a strongly convex C more realistic?
>>> >>If optimization yields a ... big jump ... the AI is likely to redesign
>>> >>itself in a fairly major way ... these large steps keep repeating to
>>> >>superintelligence, or at some point the AI can't redesign ... itself.
>>> >
>>> >As the AI progresses there are contrary forces working for and against
>>> >accelerated growth. ... it is able to implement any one optimization
>>> >idea in a short time. ... but ... the AI will wisely first work
>>> >on the easiest most likely to succeed ideas.
>>>
>>> You are ... assuming that the AI's time to complete any one task is the
>>> occurrence that happens on our time scale. But I'm thinking in terms of a
>>> series of AIs created by human programmers ...
>>
>> The same argument seems to apply at this broader level. ...
>
>Ah, let me clarify: The succession of levels that the AI pushes itself
>through, not the succession of levels that the programmer tries. ...
>Each time the AI's intelligence jumps, it can come up with a new list. If it
>can't come up with a new list, then the intelligence hasn't jumped enough and
>the AI has bottlenecked. The prioritized lists probably behave like you said
>they would. It's the interaction between the lists of thoughts and the
>thinker that, when you zoom out, has the potential to result in explosive growth.
Granted that as the AI progresses, it can add new items to its ideas list, and
can re-evaluate the rankings of list items. But this is also true of many of
the other familiar intelligence growth processes (I gave a long list).
In practice, this has rarely led to superexponential growth. Of course there
is always a "potential" for it, but do we have any *arguments* suggesting this
should be different?
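
For concreteness, here is a minimal sketch (my own illustration, not from
either of our posts) of the series quoted above: each optimization pass adds
a gain C(I) to the running intelligence I, with the program P and optimizer O
held fixed and folded into C. The two example functions are assumptions of
mine, chosen only to contrast a concave C with a strongly convex one.

def run(C, I=1.0, steps=12, cap=1e12):
    # Iterate the quoted series: I'_{k+1} = C(I + I'_1 + ... + I'_k),
    # i.e. each pass computes its gain from the running total so far.
    history = [I]
    for _ in range(steps):
        I += C(I)
        history.append(I)
        if I > cap:          # stop once growth has clearly run away
            break
    return history

concave = lambda i: 0.5 * i ** 0.5   # gain/I -> 0: each jump matters less
convex  = lambda i: 0.5 * i ** 2     # gain/I grows with I: jumps accelerate

print(run(concave))   # steady but decelerating relative growth
print(run(convex))    # blows past the cap within about eight passes

The toy run only restates the question, of course; it says nothing about
whether the real C is strongly convex.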
Robin Hanson
hanson@econ.berkeley.edu http://hanson.berkeley.edu/
RWJF Health Policy Scholar, Sch. of Public Health 510-643-1884
140 Warren Hall, UC Berkeley, CA 94720-7360 FAX: 510-643-8614