On Wed, 2 May 2001, Anders Sandberg wrote:
> No problems with that. But I think some of the macros you use are
> non-trivial and in a discussion like this they ought to be on the
> visible level.
You're no fun to argue with. If you instantly spot even the subtler
subterfuges, how am I supposed to weasel myself out of half-baked
arguments?
> People are very aware about the problem, IMHO. It is just that so far
Most IT people I know are pretty much into heavy self-delusion, and
usually interpret criticism of the methods they adhere to as a personal
attack. I think the field could use a liberal sprinkling of professional
humility.
> many of the solutions have not panned out, making people rather
> risk-aversive when it comes to new approaches. But given how quickly
Of course it's the economy (whether the grant-determining paper output, or
products delivered on schedule, i.e. at sufficient quality before the
competitors') that causes people to become risk-averse. Industry no longer
seems able to afford blue-sky research. Modern variants of Xerox PARC are
quite scarce. Our only hope is smart patronage.
> people also hang on to new computing trends when they become
> fashionable and have enough mass, I don't think a method of better
You mention the correct word: "fashion". Contrary to its claims, the field
is not nearly as rational as it pretends to be.
> software efficiency would be ignored if it could demonstrate a
> measurable improvement.
It is difficult to justify considerable investment with an ROI latency of
decades, particularly in this day and age. People flock quickly to whatever
field has currently emerged as hot, and desert it just as quickly when it
fails to deliver on its (notoriously overhyped) promises. Some fields stick
around long enough to see several waves of the above behaviour.
> Evolutionary algorithms are great for specialised modules, but lousy
> at less well defined problems or when the complexity of the problem
> makes the evolutionary search space too nasty. I don't think we will
I'm arguing that we don't have real evolutionary algorithms. First, we
have toy population sizes, and not enough generations, because our
hardware is so very lousy. Noise generation is easy, but the fitness
function usually takes its sweet time to evaluate. Mapping the bottleneck
stages to reconfigurable hardware should somewhat ameliorate that.
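To make the point concrete, here is a minimal sketch in Python (the
population size, the toy bit-counting fitness and all names are my own
illustrative choices, not anybody's production setup): virtually all the
wall-clock time disappears into the fitness evaluation, which is exactly
the stage you would want to push into reconfigurable hardware.

import random

POP_SIZE = 64          # toy population size -- the complaint made above
GENOME_LEN = 128
MUTATION_RATE = 0.01

def expensive_fitness(genome):
    # Stand-in for the real evaluation (a simulation, a compile-and-test
    # run, ...); in practice this call dominates the runtime.
    return sum(genome)

def mutate(genome):
    return [b ^ 1 if random.random() < MUTATION_RATE else b for b in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(100):                  # "not enough generations"
    scored = sorted(population, key=expensive_fitness, reverse=True)
    parents = scored[:POP_SIZE // 2]           # truncation selection
    population = parents + [mutate(random.choice(parents))
                            for _ in range(POP_SIZE - len(parents))]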
More importantly, the investigators seem to somehow assume that
evolutionary algorithms are simple. They are simple only superficially. IMO,
biology has only been able to accomplish what it did because it uses
something else. During long rounds of coevolution both the substrate and
the mutation function have changed in mutual adaptation, creating the
usual benign form of the mapping of sequence to fitness space: long
neutral-fitness filaments, allowing mutants to percolate through wide
areas of sequence space at very little cost, and maximum diversity in a
small ball once you leave the filaments.
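As a toy illustration of what I mean by percolation along neutral
filaments (my own construction, nothing to do with real genomes): take a
fitness function in which only a few loci matter, let a single-point
mutant walk accept only fitness-neutral steps, and watch how far it drifts
in Hamming distance without ever paying a fitness cost.

import random

GENOME_LEN = 200
CODING = set(range(20))   # only these loci affect fitness; the rest are neutral

def fitness(genome):
    return sum(genome[i] for i in CODING)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

start = [random.randint(0, 1) for _ in range(GENOME_LEN)]
current, f0 = start[:], fitness(start)

for _ in range(5000):
    pos = random.randrange(GENOME_LEN)
    mutant = current[:]
    mutant[pos] ^= 1
    if fitness(mutant) == f0:     # accept only fitness-neutral moves
        current = mutant

print("drifted", hamming(start, current), "bits at zero fitness cost")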
So what happens is that investigators pick up a framework they deem nifty
(vogue plus personal bias), hard-code a mutation function, let it run on
trivial population sizes for a few rounds, and then complain that the
approach doesn't scale much beyond the trivial.
If you pick a few ad hoc numbers from the parameter space, you will almost
certainly wind up with a suboptimal, nonadaptive system. The substrate does
not show the above properties, and the mutation function is held fixed, so
it can't adapt itself by learning the limitations of the substrate (ideally
you don't leave all the work to the mutation function, but coevolve the
substrate as well -- reconfigurable hardware gives us the most basic
capability for that for the first time), nor can it discover the emergent
code that maps higher-order genomic elements into shape features.
The mutation rate over the genome is not constant. The genome uses an
evolved modular representation, where there is e.g. a morphological code
which allows you to shift a limb from one place on the body to another
without turning it into a random mass of tissue. There are probably
adaptive-response metalibraries stored in the biological genome, an
adaptive mutation rate being the simplest example.
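The simplest case is easy to sketch (roughly the self-adaptation trick
from evolution strategies; the constants and names below are just my
illustrative choices): let each individual carry its own mutation rate,
mutate the rate first, then the genes with the new rate, and let selection
act on the combined result.

import math
import random

GENOME_LEN = 64
POP_SIZE = 30
TAU = 1.0 / math.sqrt(GENOME_LEN)   # learning rate for the mutation rate itself

def fitness(genes):
    return sum(genes)                # toy objective: maximise the number of 1-bits

def make_individual():
    return {"genes": [random.randint(0, 1) for _ in range(GENOME_LEN)],
            "rate": 0.05}            # per-locus mutation probability, carried along

def reproduce(parent):
    # Mutate the mutation rate first (log-normal perturbation), then mutate
    # the genes with the new rate: the mutation function is itself under
    # selection pressure.
    rate = parent["rate"] * math.exp(TAU * random.gauss(0.0, 1.0))
    rate = min(0.5, max(1.0 / GENOME_LEN, rate))
    genes = [b ^ 1 if random.random() < rate else b for b in parent["genes"]]
    return {"genes": genes, "rate": rate}

population = [make_individual() for _ in range(POP_SIZE)]
for _ in range(200):
    population.sort(key=lambda ind: fitness(ind["genes"]), reverse=True)
    parents = population[:POP_SIZE // 2]
    population = parents + [reproduce(random.choice(parents))
                            for _ in range(POP_SIZE - len(parents))]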
I have a strong feeling we're stuck in what is essentially a weakly
biased random search in gene space, and performance will keep scaling
poorly for as long as we fail to realize that something more intelligent
is going on in there.
To use evolution to produce solutions, we must first learn to evolve.
There might be several phase transitions along the way, in which the
system uses more and more advanced coding mechanisms (which it will
hopefully discover, given enough resources to search the space), and
becomes better and better at the task.
Now this was pure speculation, but I'm missing the investigations that
would check whether something hidden is operating in there. We're still
doing minor variations on the scheme produced by the few seminal works,
and are slowly getting discouraged by the fact that the systems seem to
remain stuck at the digital analogue of autocatalytic sets. Evolutionary
algorithms have had a deteriorating reputation for a while now.
Sound familiar?
> get a dramatic jump in software abilities through stringing together
> efficient small modules since as you say the glueing is the hard part
> and not easy to evolve itself. On the other hand, it seems to be a
I was actually thinking about procedural glue that lumps together interface
vectors to catch the noise. You'll need some fancy ultrawide ALUs to
compute that efficiently.
> way to help improve software and hardware a bit by making the hardware
> adaptable to the software, which is after all a nice thing.
>
> My experience with evolving mutation functions and fitness landscapes
Can you describe for us the state of the art in making the mutation
function a part of the population?
> suggest that this is a very hard problem. Computing cycles help, but I
> am not that optimistic about positive feedback scenarios. The problem
> is likely that the mutation function is problem-dependent, and for
Of course.
> arbitrary, ill-defined and complex problems there are no good
> mutation functions. Life only had to solve the problem of adapting to
The first ones will perform poorly, surely.
> staying alive, the rest was largely random experimentation (with
> sudden bangs like Cambrium when occasional tricks became available).
I'm very interested in the shape of these tricks. I cannot currently think
of any easy way to flush out these ever-elusive higher-order mechanisms
from the sea of flat numbers constituting the genome.
> As for taking over computational resources, that implies that
> intelligence is just a question of sufficient resources and not much
> of an algorithmic problem. But so far all evidence seem to point
The rate of discovery of new algorithms is certainly powered by the search
resources available to us. It is both a question of smarts (hitting the
right needle in the haystack of the initial parameter space in a rather
brute-forceish, but positively self-reinforcing manner), and of having
enough firepower to keep going, puncturing subsequent kinetic barriers
along the way.
It is demanding mental work to keep zooming out from the problem so that
you become increasingly flexible as you eliminate hidden, crippling
built-in assumptions -- which of course makes the search space larger, and
thus brings in the rather absurd hardware requirements I mentioned.
It is certainly hard to get going. There seems to be a barrier in there,
one which might take several decades of hard work to pierce. It looks a
lot like a bootstrap problem.
> towards algorithms being very important; having more resources speed
> up research, but I have not seen any evidence (yes, not even Moravec's
> _Robot_) that suggest that if we could just use a lot of computing
> power we would get smarter behavior. Besides, hostile takeovers of net
I'm sorry if I came across as a brute-force megalomaniac. You can keep
cooking random bitsoup in circumstellar computronium clouds until the
galactic cows come home and not produce any solutions if the initial
set of parameters is not right.
> resources are rather uncertain operations, highly dependent on the
> security culture at the time, which is hard to predict.
Code reviews do help, but we're trapped within the constraints of the
system: very low variation across the population, and not smooth
degradation but sudden failure (imagine a dog having a sudden coredump
when it runs over a certain tile pattern while the sun illuminates it at a
very specific angle, and doing perfectly fine under any other conditions).
> > This is a very favourable scenario, especially if it occurs
> > relatively early, because it highly hardens the network layer
> > against future perversion attempts by virtue of establishing
> > a baseline diversity and response adaptiveness.
>
> Sounds like a good defense in court :-)
I'm rooting for the (hopefully coming) Microsoft Net worm, of proportions
orders of magnitude bigger than Morris's. You don't even need true
polymorphism, just enough variation to escape pattern-matcher vaccines,
plus a library of canned exploits, preferably with a few undocumented ones
among them. It would wake people up to the damage potential, and would
readjust attitudes with regard to holistic system security.