quoth the devourer of cassiterite: this stone is heavy (I)

From: Eugene Leitl (Eugene.Leitl@lrz.uni-muenchen.de)
Date: Thu Feb 06 1997 - 16:16:49 MST


Stephen Wolfram, "Cellular Automata and Complexity -- Collected
Papers", Adison Wesley (1994), pp. 309-328. [ Moderately technical,
highly enjoyable. If you haven't heard about CAs yet, you've been
missing a lot of fun -- 'gene ]

Approaches to Complexity Engineering (1986).

Abstract -- Principles for designing complex systems with specified
forms of behaviour are discussed. Multiple scale cellular automata
are suggested as dissipative dynamical systems suitable for tasks
such as pattern recognition. Fundamental aspects of the engineering
of such systems are characterized using computation theory, and some
practical procedures are discussed.

The capabilities of the brain and many other biological systems go
far beyond those of any artificial systems so far constructed by
conventional engineering means. There is however extensive evidence
that at a functional level, the basic components of such complex
natural systems are quite simple, and could for example be emulated
with a variety of technologies. But how a large number of these
components can act together to perform complex tasks is not yet
known. There are probably some rather general principles which
govern such overall behaviour, and allow it to be moulded to
achieve particular goals. If these principles could be found and
applied, they would make new forms of engineering possible. This
paper discusses some of the approaches to such forms of engineering
with complex systems. The emphasis is on general concepts and
analogies. But some of the specific systems described should
nevertheless be amenable to implementation and detailed analysis.

In conventional engineering or computer programming, systems are
built to achieve their goals by following strict plans, which
specify the detailed behaviour of each of their component parts.
Their overall behaviour must always be simple enough that
complete prediction and often also analysis is possible. Thus
for example motion in conventional mechanical engineering devices
is usually constrained simply to be periodic. And in conventional
computer programming, each step consists of a single operation
on a small number of data elements. In both of these cases, much
more complex behaviour could be obtained from the basic components,
whether mechanical or logical, but the principles necessary to
make use of such behaviour are not yet known.

Nature provides many examples of systems whose basic components
are simple, but whose overall behaviour is extremely complex.
Mathematical models such as cellular automata (e.g. [1]) seem
to capture many essential features of such systems, and provide
some understanding of the basic mechanisms by which complexity
is produced for example in turbulent fluid flow. But now one
must use this understanding to design systems whose complex
behaviour can be controlled and directed to particular tasks.
From complex system science, one must now develop complex systems
engineering.
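
[ For the curious, a minimal Python sketch of a one-dimensional
  ("elementary") cellular automaton, just to make the notion concrete.
  The rule number (110), lattice width and step count are arbitrary
  illustrative choices, not anything prescribed by the paper. ]

    # One-dimensional cellular automaton: each cell is updated from its
    # own state and its two neighbours, using the bits of the rule
    # number as a lookup table (Wolfram's standard rule numbering).

    def step(cells, rule=110):
        """One synchronous update of a row of 0/1 cells (periodic boundary)."""
        n = len(cells)
        table = [(rule >> i) & 1 for i in range(8)]
        return [table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
                for i in range(n)]

    def run(width=64, steps=32, rule=110):
        cells = [0] * width
        cells[width // 2] = 1                      # single seed cell
        for _ in range(steps):
            print(''.join('#' if c else '.' for c in cells))
            cells = step(cells, rule)

    if __name__ == '__main__':
        run()

[ Running it prints a space-time diagram in which the single seed cell
  grows into an intricate, aperiodic pattern: simple components, complex
  collective behaviour. ]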

Complexity in natural systems typically arises from the collective
effect of a very large number of components. It is often essentially
impossible to predict the detailed behaviour of any one particular
component, or in fact the precise behaviour of the complete system.
But the system as a whole may nevertheless show definite overall
behaviour and this behaviour usually has several important features.

Perhaps most important, it is robust, and is typically unaffected
by perturbations or failures of individual components. Thus for
example a change in the detailed initial conditions for a system
usually has little or no effect on the overall outcome of its
evolution (although it may have a large effect on the detailed
behaviour of some individual elements). The visual system in the
brain, for example, can recognize objects even though there are
distortions or imperfections in the input image. Its operation is
also presumably unaffected by the failure of a few neurons. In
sharp contrast, however, typical computer programs require explicit
account to be taken of each possible form of input. In addition,
failure of any one element usually leads to catastrophic failure
of the whole program.

Dissipation [i.e. irreversible mapping, if time is to flow
backwards -- 'gene ], in one of many forms, is a key principle
which lies behind much of the robustness seen in natural systems.
Through dissipation, only a few features in the behaviour of the
system survive with time, and others are damped away. Dissipation
is often used to obtain reliable behaviour in mechanical engineering
systems. Many different initial motions can for example be
dissipated away through viscous damping which brings particular
components to rest. Such behaviour is typically represented by
a differential equation whose solution tends to a fixed point
at large times, independent of its initial conditions. Any
information on the particular initial conditions is thus destroyed
by the irreversible evolution of the system [ the discrete
Hamiltonian defines a field of arrows over the array of state
space hypervoxels, the dissipative (non-Liouville) Hamiltonians
yielding bifurcations, if we take a look over our shoulder;
these are unresolvable unless we also throw self evolution
history into the bargain -- 'gene ].
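
[ A toy numerical illustration: an overdamped variable relaxing to a
  fixed point, so that every initial condition ends up in the same
  place. The damping rate, step size and starting values below are
  arbitrary choices. ]

    # Overdamped relaxation dx/dt = -rate * x, integrated with plain
    # Euler steps: whatever the initial condition, the state is damped
    # toward the fixed point x = 0, so the initial information is lost.

    def relax(x0, rate=1.0, dt=0.01, steps=2000):
        x = x0
        for _ in range(steps):
            x += -rate * x * dt
        return x

    if __name__ == '__main__':
        for x0 in (-3.0, 0.5, 7.0):
            print(x0, '->', round(relax(x0), 6))   # all end up near 0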

In more complicated systems, there may be several fixed points,
reached from different sets of initial conditions. This is the case
for an idealized ball rolling on a landscape, with dissipation in the
form of friction. Starting at any initial point, the ball is "attracted"
towards one of the local height minima in the landscape, and eventually
comes to rest there. The set of initial positions from which the ball
goes to a particular such fixed point can be considered a "basin of
attraction" for that fixed point. Each basin of attraction is bounded
by a "watershed" which typically lies along a ridge in the landscape.
Dissipation destroys detailed information about the initial conditions,
but preserves the knowledge of which basin of attraction they were in.
The evolution of the system can be viewed as dividing its inputs into
various "categories", corresponding to different basins of attraction.
This operation is the essence of many forms of pattern recognition:
despite small changes, one recognizes that a particular input is in
a particular category, or matches a particular pattern. In the example
of a ball rolling on a landscape, the categories correspond to
different regions of initial positions. Small changes in input
correspond to small changes in initial position.
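
[ The same picture in a few lines of Python: an overdamped "ball"
  sliding downhill on the landscape h(x) = cos(x). The landscape and
  the starting points are purely illustrative; the point is that each
  starting point gets assigned to the basin of one of the minima. ]

    import math

    # Overdamped downhill motion dx/dt = -h'(x) on h(x) = cos(x), whose
    # minima lie at odd multiples of pi. Each initial position settles
    # into the minimum of whichever basin it started in.

    def settle(x0, dt=0.01, steps=20000):
        x = x0
        for _ in range(steps):
            x += math.sin(x) * dt        # -h'(x) = sin(x) for h(x) = cos(x)
        return x

    if __name__ == '__main__':
        for x0 in (0.5, 2.0, 4.0, 6.5):
            print(f'start {x0:4.1f} -> settles near {settle(x0):6.3f}')

[ Starting points 0.5, 2.0 and 4.0 all settle near pi; 6.5 lies beyond
  the ridge at 2*pi and settles near 3*pi instead -- the two categories
  of this little pattern recognizer. ]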

The state of the system just discussed is given by the continuous
variables representing the position of the ball. More familiar
examples of pattern recognition arise in discrete or digital
systems [ of course every physical system is discrete both in
time and in space, all thanks to the Bekenstein limit -- 'gene ],
such as those used for image processing. An image might be
represented by a 256x256 array of cells, each black or white.
Then a simple image processing (or "image restoration") operation
would be to replace any isolated black cell by a white cell. In
this way certain single cell errors in the images can be removed
(or "damped out"), and classes of images differing just by such
errors can be recognized as equivalent (e.g. [3]). The process
can be considered to have attractors corresponding to the possible
images without such errors. Clearly three are many of these
attractors, each with a particular basin of attraction. But in
contrast to the example with continuous variables above, there
is no obvious measure of "distance" on the space of images,
which could be used to determine which basin of attraction a
particular image is in. Rather the category of an image is best
determined by explicit application of the image processing
operation.
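
[ The isolated-cell cleanup is easy to write down explicitly; a small
  Python sketch, assuming an 8-cell neighbourhood (the paper does not
  specify one) and a toy 4x4 image in place of 256x256. ]

    # Replace any black (1) cell with no black neighbours by a white (0)
    # cell; single-cell errors are "damped out", blocks are preserved.

    def remove_isolated(image):
        h, w = len(image), len(image[0])
        out = [row[:] for row in image]
        for i in range(h):
            for j in range(w):
                if image[i][j] == 1:
                    neighbours = sum(image[i + di][j + dj]
                                     for di in (-1, 0, 1) for dj in (-1, 0, 1)
                                     if (di, dj) != (0, 0)
                                     and 0 <= i + di < h and 0 <= j + dj < w)
                    if neighbours == 0:
                        out[i][j] = 0
        return out

    if __name__ == '__main__':
        noisy = [[1, 0, 0, 0],      # isolated black cell -- removed
                 [0, 0, 0, 0],
                 [0, 0, 1, 1],      # part of a block -- kept
                 [0, 0, 1, 1]]
        for row in remove_isolated(noisy):
            print(row)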

Length-n sequences of bits can be considered as corners of an
n-dimensional unit hypercube. The Hamming distance between two
sequences can then be defined as the number of edges of the
hypercube that must be traversed to get from one to the
other, or, equivalently, the total number of bits that differ
between them. It is possible using algebraic methods to devise
transformations with basins of attraction corresponding to
spheres which enclose all points at a Hamming distance of at
most say two bits from a given point [4]. This allows error-
correcting codes to be devised in which definite messages can
be reconstructed even though they may contain, say, up to two
erroneous bits.
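
[ A toy version of the idea in Python, using the length-5 repetition
  code rather than the algebraic construction of ref. [4]: the two
  codewords sit at Hamming distance 5, so the spheres of radius 2
  around them do not overlap and any two bit errors can be corrected. ]

    # Hamming distance plus nearest-codeword decoding: each codeword's
    # "basin of attraction" is the sphere of words closest to it.

    def hamming(a, b):
        """Number of bit positions in which two equal-length bit strings differ."""
        return sum(x != y for x, y in zip(a, b))

    CODEWORDS = ['00000', '11111']

    def decode(word):
        """Map a received word to the nearest codeword."""
        return min(CODEWORDS, key=lambda c: hamming(word, c))

    if __name__ == '__main__':
        print(hamming('00000', '11111'))   # 5
        print(decode('01001'))             # two errors -> '00000'
        print(decode('10111'))             # one error  -> '11111'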

The transformations used in error-correcting codes are specially
constructed to have basins of attraction with very simple
forms. Most dissipative systems, however, yield much more
complicated basins of attraction, which cannot for example
be described by simple scalar quantities such as distances.
The form of these basins of attraction determines what kinds of
perturbations are damped out, and thus what classes of inputs
can be recognized as equivalent.

[ to be continued -- 'gene ]

P.S. Mathematica 3.0 is now available for Linux (special price for
     students). Since Mathematica is a brainchild of the amazing
     S. Wolfram, it offers good support for exploring the CA
     universes. (Besides, if you are a student/scientist/engineer,
     it can also come in quite handy in lots of circumstances.)

P.P.S. Have just picked up "Artificial Life" by Steven Levy
       (1992), a moby nifty book. It contains lots of anecdotes
       about the life of the Great Ones, and several ALife
       factoids otherwise hard to come by. Lightly written,
       yet chock-full of the peculiar ALife spirit.

P.P.P.S. Has anybody checked out "A New Kind of Science" yet?
         Is it out by now? Is it worth reading?


