Re: Mike Perry's work on self-improving AI

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Sep 05 1999 - 12:36:43 MDT


I think you're invoking hypotheses that are far too general and complex
to explain the failure of what was probably a simple program. I would
explain the doctor-patient program's failure to Transcend the same way
I would explain EURISKO's failure. You have a limited set of
heuristics, and a
limited set of problems those heuristics apply to, and a limited set of
improvements those heuristics can achieve. And when the heuristics have
made those improvements, then the new heuristics might improve faster,
but they can't improve in qualitatively new ways. Looking at Hal
Finney's capsule description, it's obvious that Docpat (to coin a name)
was even less sophisticated than EURISKO, which explains the less
impressive results.
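
Here's a toy sketch of that plateau, just to make it concrete. It is
purely illustrative - the ceiling, the gain factor, and the pool of
"heuristics" are all invented, and none of it reflects how EURISKO or
Docpat actually worked:

    # A fixed representation puts a hard ceiling on how good any
    # heuristic in the pool can get, no matter which heuristic is
    # doing the improving.
    CEILING = 10.0

    def improve(heuristic, tool):
        """A stronger tool improves a heuristic faster, never past CEILING."""
        gain = min(0.1 * tool, CEILING - heuristic)
        return heuristic + max(gain, 0.0)

    pool = [1.0, 2.0, 3.0]          # a limited set of heuristics
    for generation in range(200):
        best = max(pool)            # the best heuristic improves the rest
        pool = [improve(h, best) for h in pool]

    print(pool)  # every entry converges to CEILING, then improvement stops

The improved heuristics do improve faster for a while - the gain scales
with the best tool in the pool - but nothing inside the loop can move
the ceiling itself.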

I can't reference the page that would make it all clear, because I
haven't posted it yet. But in my technological timeline, there's a long
way between a "self-optimizing compiler" - cool as those are - and
a seed AI.

The problem is multiplex, but three of the most important aspects are
scalability, circularity, and generality. Deep Blue was scalable; if
you piled on speed, you got a qualitative change in the kind of thought
that was taking place. With Docpat, piling on speed just got you the
same results faster.
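
A crude way to see the difference (again just an illustration - the toy
data and the "rules" table are made up, and this isn't Deep Blue's or
Docpat's actual machinery):

    # Scalable: a bigger compute budget lets the search examine more
    # candidates, so its answer can actually change, not just arrive sooner.
    def search_best(values, budget):
        return max(values[:budget])

    # Not scalable: a fixed rule table returns the same entry at any speed.
    def lookup(symptom, rules):
        return rules[symptom]

    values = [3, 1, 4, 1, 5, 9, 2, 6]
    print(search_best(values, budget=2))       # 3 - a shallow look
    print(search_best(values, budget=8))       # 9 - more compute, new answer
    print(lookup("fever", {"fever": "rest"}))  # identical however fast it runs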

The next aspect is circularity; with Docpat, and even EURISKO, the thing
being optimized was too similar to the thing doing the optimizing, and
both were too simple. The Elisson design partially addresses this by
having the total intelligence optimize a single domdule, the operation
of which is very different in character from the operation of the total intelligence.

The last aspect is generality: the heuristics or algorithms or
whatever simply weren't general enough, which the Elisson design
addresses by having
multiple architectural domdules of sufficient richness to form thoughts
about generic processes, just as our own intuitions for causality and
similarity and goal-oriented behavior apply - wrongly or rightly - in
domains from mathematics to automobile repair. (Note that, in contrast
to the classical AI and Standard Social Sciences Model dogma, thinking
about generic things takes tremendous richness and lots of
special-purpose code, not simplicity and generality - you have to have
so many intuitions that you can draw analogies to the components or
behaviors of almost any Turing-computable process.)

-- 
           sentience@pobox.com          Eliezer S. Yudkowsky
        http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way

