From: Matt Gingell (mjg223@nyu.edu)
Date: Tue Sep 07 1999 - 08:00:37 MDT
From: Joseph Sterlynne <vxs@mailandnews.com>
>This appears to be a problem which truly requires serious investigation; it
>should predicate any extensive attempts at implementation. While I am not
>entirely familiar with what the literature has to say on this issue it
>seems (given examples like this) that we do not have a sufficiently rich
>understanding of some of the rather basic considerations. Some of this
>concern has, I think, been incorporated as a major facet of studies of
>complex systems: we don't know why some systems can defeat local optima,
>how minimally complex various features must be, or even what the shape of
>certain multidimensional solution spaces is. As Eliezer suggested, Perry's
>system might in fact be insufficiently rich in certain areas to warrant
>this broader concern. Whatever the case, we surely must possess at least a
>working model of these elements to surpass what successes Perry enjoyed.
I don’t think there can be any general solution to the problem of local optima.
There are lots of different useful techniques, but any hill-climbing algorithm
can potentially get stuck. It’s a question of the topology of the search space,
which can be arbitrarily complex.
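
To make the point concrete, here is a rough Python sketch (the surface, starting points, and step size are made up purely for illustration) of a greedy hill-climber on a one-dimensional function with two peaks. Started near the smaller peak, it stops there and never finds the taller one.

from math import exp

def f(x):
    # Two bumps: a small one near x = -2 and a taller one near x = 3.
    return exp(-(x + 2) ** 2) + 2.0 * exp(-(x - 3) ** 2)

def hill_climb(x, step=0.1, iterations=1000):
    for _ in range(iterations):
        # Look one step to either side; move only if it improves f.
        best = max((x - step, x, x + step), key=f)
        if best == x:
            break  # no neighbour is better: a (possibly local) optimum
        x = best
    return x

print(hill_climb(-3.0))  # stops near -2, the local optimum
print(hill_climb(1.0))   # reaches 3, the global optimum

Which answer you get depends entirely on where you start, which is the whole problem.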
Perceptrons are interesting because, for the functions they can represent, the error
surface has no local minima. Unfortunately that is a consequence of the narrow range
of things a perceptron can do - only linearly separable functions - rather than of the
algorithm doing anything particularly clever.
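
As a rough illustration (the code and training set are my own toy example, not anything from a real system), the classic perceptron learning rule converges on a linearly separable problem like AND. The same rule loops forever on XOR, which is the flip side of the "no local minima" property.

def train_perceptron(samples, learning_rate=0.1, epochs=100):
    w = [0.0, 0.0]   # weights
    b = 0.0          # bias
    for _ in range(epochs):
        errors = 0
        for (x1, x2), target in samples:
            output = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - output
            if error != 0:
                errors += 1
                w[0] += learning_rate * error * x1
                w[1] += learning_rate * error * x2
                b += learning_rate * error
        if errors == 0:
            break  # converged: every training point classified correctly
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
print(train_perceptron(AND))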
If you know something about a particular search surface, then you can invent
heuristics that take advantage of that knowledge instead of just wandering in
the direction that looks most immediately promising. You can try to infer useful
heuristics from your previous adventures, which itself most likely reduces to a
hill-climbing search of a hypothesis space. Then you end up with layers of
meta-heuristics, so you try to close the loop by integrating your heuristics
into a single problem domain. For instance, organisms in a genetic search might
contain a block of data representing a mutation function, which you hope will be
optimized, along with the rest of the genome, by the same selection pressure.
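
Here is a hedged sketch of that idea, in the spirit of evolution strategies: each organism carries a mutation step size that is itself mutated and inherited, so selection tunes the mutation parameter along with the genome. The fitness function and all the constants are just placeholders I made up.

import random, math

def fitness(x):
    return -(x - 7.0) ** 2   # toy objective: get x close to 7

def evolve(pop_size=50, generations=200):
    # Each individual is (genome, sigma): the value being optimised plus the
    # mutation step size that travels with it.
    population = [(random.uniform(-10, 10), 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the better half, then refill by mutating the survivors.
        population.sort(key=lambda ind: fitness(ind[0]), reverse=True)
        survivors = population[: pop_size // 2]
        children = []
        for genome, sigma in survivors:
            # Mutate the mutation parameter first (log-normal perturbation),
            # then use the new sigma to mutate the genome itself.
            new_sigma = sigma * math.exp(random.gauss(0, 0.2))
            new_genome = genome + random.gauss(0, new_sigma)
            children.append((new_genome, new_sigma))
        population = survivors + children
    return max(population, key=lambda ind: fitness(ind[0]))

print(evolve())  # best (genome, sigma) found; genome should end up near 7

The nice part is that populations sitting near an optimum tend to evolve small sigmas, while populations far from one keep larger sigmas, so the search adjusts its own exploration without you having to hand-tune a mutation rate.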
I think it’s interesting to ask why evolution didn’t get stuck, and whether a free
market economy can be modeled as a hill-climb optimizing wealth. If so, does it have
local optima?
-matt