[Fwd: AI big wins]

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Oct 01 1998 - 19:08:17 MDT


Aargh. I see that this also went only to Hanson.

-- 
        sentience@pobox.com         Eliezer S. Yudkowsky
         http://pobox.com/~sentience/AI_design.temp.html
          http://pobox.com/~sentience/sing_analysis.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.

attached mail follows:


Robin Hanson wrote:
>
> >.... You are dealing with fragmentary
> >increments, assuming that the AI's time to complete any one task is the
> >occurrence that happens on our time scale. But I'm thinking in terms of a
> >series of AIs created by human programmers, and that the entire potential of a
> >given model of AI will be achieved in a run over the course of a few hours at
> >the most, or will bog down in a run that would take centuries to complete.
> >... In either case, the programmer (or more likely the Manhattan
> >Project) sighs, sits down, tries to fiddle with O or I and add abilities, and
> >tries running the AI again. ... But what matters is not
> >the level it starts at, but the succession of levels, and when you "zoom out"
> >to that perspective, the key steps are likely to be changes to the fundamental
> >architecture, not optimization.
>
> The same argument seems to apply at this broader level. The programmer has
> a list of ideas for fundamental architecture changes, which vary in how
> likely they are to succeed, how big a win they would be if they worked,
> and how much trouble they are to implement. The programmer naturally tries
> the best ideas first.

Ah, let me clarify: I mean the succession of levels that the AI pushes itself
through, not the succession of levels that the programmer tries. Once again,
you have to look at a case where you're not just dealing with a list of ideas
that a single intelligence - be it AI or human - comes up with, but with a
changing intelligence, and above all a self-altering intelligence. If you
don't look at the self-alteration, you're not likely to discover any positive
feedback.

Each time the AI's intelligence jumps, it can come up with a new list. If it
can't come up with a new list, then the intelligence hasn't jumped enough and
the AI has bottlenecked. The prioritized lists probably behave like you said
they would. It's the interaction between the lists of thoughts and the
thinker that, when you zoom out, has the potential to result in explosive growth.
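
(A toy numerical sketch of that distinction - entirely my own illustration,
with made-up numbers, not anything Hanson or I actually computed: in the
first regime you work down a single fixed prioritized list, so the gains
shrink as you go; in the second, any jump in intelligence past some threshold
regenerates the list, so the best available idea stays strong. The names,
thresholds, and multipliers below are all assumptions for illustration.)

    # Toy sketch, assumed numbers throughout.

    def fixed_list(intelligence=1.0, ideas=(1.5, 1.3, 1.15, 1.05, 1.01)):
        """Programmer's-eye view: one prioritized list, best ideas first,
        so returns diminish down the list."""
        trajectory = [intelligence]
        for gain in ideas:
            intelligence *= gain
            trajectory.append(intelligence)
        return trajectory

    def self_altering(intelligence=1.0, steps=10, regen_threshold=1.2):
        """Thinker-in-the-loop view: a big enough jump in intelligence
        regenerates the list, restoring the quality of the best idea."""
        trajectory = [intelligence]
        last_regen = intelligence
        best_gain = 1.5                      # best idea on the current list
        for _ in range(steps):
            intelligence *= best_gain
            trajectory.append(intelligence)
            if intelligence / last_regen >= regen_threshold:
                # Intelligence jumped enough: draw up a fresh list whose
                # best idea is (by assumption) no worse than before.
                last_regen = intelligence
                best_gain = 1.5
            else:
                # No real jump: still working down the old list, so the
                # remaining ideas are weaker -- the "bottleneck" case.
                best_gain = 1.0 + (best_gain - 1.0) * 0.5
        return trajectory

    if __name__ == "__main__":
        print("fixed list:    ", [round(x, 2) for x in fixed_list()])
        print("self-altering: ", [round(x, 2) for x in self_altering()])

Under these assumed numbers the fixed list plateaus after a few ideas, while
the regenerating list compounds - which is the only point of the sketch.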

-- 
        sentience@pobox.com         Eliezer S. Yudkowsky
         http://pobox.com/~sentience/AI_design.temp.html
          http://pobox.com/~sentience/sing_analysis.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.

