From: Robin Hanson (hanson@econ.berkeley.edu)
Date: Sun Sep 27 1998 - 13:39:16 MDT
Eliezer S. Yudkowsky writes:
>> As the AI progresses there are contrary forces working for and against
>> accelerated growth. As the AI gets more optimized, it is able to implement
>> any one optimization idea in a shorter time. It may also be able to
>> evaluate each idea in a shorter time. But working against this is the
>> fact that the AI will wisely first work on the easiest, most likely to
>> succeed ideas. ...
>
>Now _that's_ the kind of argument I wanted to hear! Thanks, Hanson.
This was the argument I gave in my first comment on Vinge.
>... You are dealing with fragmentary
>increments, assuming that the AI's time to complete any one task is the
>occurrence that happens on our time scale. But I'm thinking in terms of a
>series of AIs created by human programmers, and that the entire potential of a
>given model of AI will be achieved in a run over the course of a few hours at
>the most, or will bog down in a run that would take centuries to complete.
>... In either case, the programmer (or more likely the Manhattan
>Project) sighs, sits down, tries to fiddle with O or I and add abilities, and
>tries running the AI again. ... But what matters is not
>the level it starts at, but the succession of levels, and when you "zoom out"
>to that perspective, the key steps are likely to be changes to the fundamental
>architecture, not optimization.
The same argument seems to apply at this broader level. The programmer has
a list of ideas for fundamental architecture changes, which vary in how
likely they are to succeed, how big a win they would be if they worked,
and how much trouble they are to implement. The programmer naturally tries
the best ideas first, so the ideas that remain are progressively worse
bets, and returns diminish at this level too.
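
To make the shape of this argument concrete, here is a small toy
simulation (my own illustration, not anything from either post; every
number and distribution in it is an arbitrary assumption). It pits the
two forces against each other: past wins make each new trial cheaper,
while sorting the idea list by expected payoff per unit of effort means
each remaining idea is a worse bet.

# Toy model of the diminishing-returns argument above. All parameters
# and distributions are illustrative assumptions, not claims about AI.
import random

random.seed(0)

# Each idea: (chance of success, proportional win if it works, effort to try).
ideas = [(random.random(), random.expovariate(2.0), random.uniform(0.5, 2.0))
         for _ in range(30)]

# The programmer (or the AI itself) wisely tries the best prospects first:
# highest expected win per unit of effort.
ideas.sort(key=lambda i: i[0] * i[1] / i[2], reverse=True)

speed = 1.0      # current optimization level; higher speed = cheaper trials
elapsed = 0.0
for p, win, effort in ideas:
    elapsed += effort / speed        # optimization shortens each trial...
    if random.random() < p:
        speed *= 1.0 + win           # ...but the remaining wins keep shrinking
    print(f"t={elapsed:7.3f}  speed={speed:8.3f}")

Whether the resulting curve of speed against elapsed time accelerates or
flattens depends entirely on how fast idea quality falls off relative to
the speedup from accumulated wins, which is exactly the tension described
above.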