Re: Mike Perry's work on self-improving AI

From: Joseph Sterlynne (vxs@mailandnews.com)
Date: Mon Sep 06 1999 - 14:22:30 MDT


> hal@finney.org

>The program was able to work on an abstract system for which it had tools
>to modify it in various ways, and some way of measuring whether the
>modification improved things.

>After two or three iterations the process would hit a limit.

>First, the problem can be seen as a matter of getting stuck on a local
>optimum.

>Something similar can be defined for the self-improving program.
>One way to think of Mike's program's failure is that as the program got
>smarter, it got more complicated.

This appears to be a problem which truly requires serious investigation; it
should precede any extensive attempts at implementation. While I am not
entirely familiar with what the literature has to say on this issue, it
seems (given examples like this) that we do not have a sufficiently rich
understanding of some of the rather basic considerations. Some of this
concern has, I think, been incorporated as a major facet of studies of
complex systems: we don't know why some systems can defeat local optima,
how minimally complex various features must be, or even what the shape of
certain multidimensional solution spaces is. As Eliezer suggested, Perry's
system might in fact be insufficiently rich in certain areas to warrant
this broader concern. Whatever the case, we surely must possess at least a
working model of these elements to surpass what successes Perry enjoyed.
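Hal's local-optimum framing can be made concrete with a toy hill climber.
This is my own sketch, not Perry's actual program; the landscape values and
names are invented for illustration. A purely greedy self-improver accepts a
change only if it scores strictly better, and so halts at the first peak it
reaches, even when a higher peak exists elsewhere:

```python
def hill_climb(landscape, start):
    """Greedy self-improvement: move to a neighboring state only if it
    scores strictly higher; stop when no single change improves things."""
    pos = start
    while True:
        candidates = [n for n in (pos - 1, pos + 1)
                      if 0 <= n < len(landscape)]
        best = max(candidates, key=lambda n: landscape[n])
        if landscape[best] <= landscape[pos]:
            return pos  # stuck: a local optimum
        pos = best

# An invented fitness landscape with a local peak (value 5 at index 2)
# and a higher global peak (value 9 at index 6).
landscape = [1, 3, 5, 2, 4, 7, 9, 6]
stuck = hill_climb(landscape, start=0)
# The climber halts at index 2 (value 5), never reaching the global
# peak at index 6 (value 9) because every path there first goes downhill.
```

After two or three accepted improvements the process halts, much as Hal
describes; escaping would require accepting temporarily worse states (as in
simulated annealing) or some richer search the greedy rule lacks.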

The same problems arise when we consider human-level augmentation. A
fruitful place to begin is with an idea familiar to us; underneath most or
all of our hopes for a more developed science of mind is the notion that we
should be able to observe and control all aspects of our mind. So in the
promise of uploading we can see that all of our mind is exposed (as code,
as some other representation). Now, I'm not necessarily challenging the
idea---just exploring it---but many of us seem to think that an upload will
have no trouble whatsoever with effecting any desired change, including
substantial upgrades. What guarantee do we have that there is no inherent
problem in a human-class mind perceiving all of itself? There is of course
the question of self-observation; it would certainly help to know how much
control our consciousness really has over the rest of the mind.

None of this excludes the possibility of some other intelligence examining
the upload and modifying it; but it remains an open question if we intend
to acquire the self-control that we so desire.



This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 15:05:03 MST