Re: Mike Perry's work on self-improving AI

From: hal@finney.org
Date: Wed Sep 08 1999 - 11:02:59 MDT


Matt Gingell, <mjg223@nyu.edu>, writes:
> I believe this approach is provably optimal in the absence of any knowledge
> about the shape of the search space, but I'm having trouble tracking down the
> reference.
>
> A while back, I wrote an artificial life system which simulates a population of
> organisms driven by neural networks. Unfortunately, I have to report that a herd
> of parallel searches most certainly can get stuck! (In my case, I was dealing
> with populations of around one or two thousand nodes.)

A provably optimal search is not enough, because it may be infinitely
slow. What you need is a method to get past local optima in a reasonable
amount of time. There is no substitute in that case for hard work and
perhaps some luck.
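
For what it's worth, the stuck-herd behavior is easy to reproduce on a
toy landscape. The Python sketch below is only an illustration (it is
nothing like Matt's neural-network system, and every function and
parameter in it is invented): a thousand greedy hill-climbers are
dropped onto a one-dimensional fitness function with many peaks, and
most of them settle on an inferior one. Giving each climber a few
random restarts (one crude way to get past local optima) cuts the
failure rate, but only by spending more evaluations, which is the
hard-work-and-luck point above.

# Toy illustration only; the landscape and all parameters are made up.
import math
import random

def fitness(x):
    # Many local maxima; the global maximum (about 0.99) is near x = 0.31.
    return math.sin(5 * x) - 0.1 * x * x

def hill_climb(x, steps=200, step_size=0.05):
    # Greedy local search: only moves that improve fitness are accepted.
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if fitness(candidate) > fitness(x):
            x = candidate
    return x

def with_restarts(restarts=5, **kw):
    # The same climber, restarted from several random points; keep the best.
    return max((hill_climb(random.uniform(-4, 4), **kw) for _ in range(restarts)),
               key=fitness)

if __name__ == "__main__":
    random.seed(0)

    # A "herd" of independent greedy climbers, each from a random start.
    herd = [hill_climb(random.uniform(-4, 4)) for _ in range(1000)]
    stuck = sum(1 for x in herd if fitness(x) < 0.93)  # settled on a lesser peak
    print("plain climbers stuck on inferior peaks: %d / 1000" % stuck)

    # The same climbers, but each allowed five fresh starts (more work, fewer stuck).
    herd2 = [with_restarts() for _ in range(1000)]
    stuck2 = sum(1 for x in herd2 if fitness(x) < 0.93)
    print("with 5 random restarts per climber:     %d / 1000" % stuck2)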

I still think it is quite possible that there are many barriers on
the self-improvement path to AI. It seems likely to me that such
barriers exist below human intelligence: someone with an IQ of 50
could not make progress on improving their own program. It is
questionable whether someone of merely human intelligence will be
smart enough to know how to change a human-level AI into a
super-human one, even (or especially, since it constrains their
options!) working one module at a time, as Eliezer suggests. And
there is no way whatsoever of knowing whether a twice-human AI can
see how to convert its program into a thrice-human one.

That self-improvement can lead to a transcendentally powerful super-AI
seems to be an article of faith unsupported by facts.

Hal


