[isml] Computers that improve themselves (fwd)

From: Eugene Leitl (Eugene.Leitl@lrz.uni-muenchen.de)
Date: Wed Apr 11 2001 - 03:16:47 MDT


______________________________________________________________
ICBMTO : N48 10'07'' E011 33'53'' http://www.lrz.de/~ui22204
57F9CFD3: ED90 0433 EB74 E4A9 537F CFF5 86E7 629B 57F9 CFD3

---------- Forwarded message ----------
Date: Tue, 10 Apr 2001 21:26:24 -0700
From: DS2000 <ds2000@mediaone.net>
Reply-To: isml@yahoogroups.com
To: isml <isml@yahoogroups.com>
Subject: [isml] Computers that improve themselves

From The News & Observer,
http://www.newsobserver.com/monday/business/Story/419010p-414835c.html
-
Published: Monday, April 9, 2001 12:13 a.m. EDT

Paul Gilster
----------------------------------------------------------------------------
Computers that improve themselves
At first glance, Darwin's ideas on evolution don't seem to have much to do
with computers. But if a line of computer code doesn't remind you at least
vaguely of a chromosome -- both are essentially stored information -- you
might want to look into the new field of evolvable hardware, where chips
redesign themselves for optimum efficiency. This is evolution with a silicon
flair.
Hot ideas come and go, but I know of no technology more likely to reshape
our relationship with computers than this one.
A computer that evolves may redesign itself in such a way that even its
inventors don't know how it's functioning. They just know that it works
better than ever before, and future generations may work even better.
Something like this has already happened in the laboratory of Adrian
Thompson at the University of Sussex in England. There, at the Center for
Computational Neuroscience and Robotics, Thompson has spent the past four
years working with computer chips that mutate. The chips can rewire their own
logic gates within nanoseconds, try out the new design, and keep the
configurations that work best.
All of this takes place not in software but in hardware. The chips are called
Field Programmable Gate Arrays. The ones Thompson uses come from San Jose
chip-maker Xilinx. The chip's transistors are organized into an array of "logic
cells" whose functions and interconnections can be changed on the fly. By
reprogramming the chip's configuration memory, its logic cells can be tuned for
any task at hand.
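To make that concrete, here is a rough sketch, in Python, of how a string of
configuration bits might map onto an array of logic cells. The encoding is
invented for illustration (a couple of function bits and a few connection bits
per cell); real Xilinx bitstream formats are proprietary and far more involved.

    import random

    # Invented encoding, for illustration only: each logic cell gets a 2-bit
    # function code plus 4 bits saying which neighbors feed it.
    FUNCTIONS = ["AND", "OR", "XOR", "NAND"]
    BITS_PER_CELL = 6

    def decode(bits, cells=100):
        """Turn a flat 0/1 string into a list of per-cell configurations."""
        config = []
        for i in range(cells):
            chunk = bits[i * BITS_PER_CELL:(i + 1) * BITS_PER_CELL]
            function = FUNCTIONS[int(chunk[:2], 2)]
            inputs = {d for d, b in zip("NESW", chunk[2:]) if b == "1"}
            config.append({"function": function, "inputs_from": inputs})
        return config

    # A random "chromosome" describing 100 cells, like the array Thompson used.
    bitstring = "".join(random.choice("01") for _ in range(100 * BITS_PER_CELL))
    print(decode(bitstring)[0])  # e.g. {'function': 'XOR', 'inputs_from': {'N', 'W'}}
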
The work draws on the insights of Hugo de Garis, a computer scientist now
working in Brussels, Belgium, who spent several years building neural
modules -- software units that could be assembled to create artificial
nervous systems.
About that project, de Garis, sounding almost like a biologist, said: "I was
very conscious of the idea of using bit strings as codable mutatable
instructions ('chromosomes') in evolutionary algorithms."
Let's untangle this. An algorithm is a way of getting something done through
computer code, something our PCs do every time we run a program. But an
evolutionary algorithm (also called a "genetic" algorithm) is different. It
keeps a population of candidate solutions, makes slight random variations
(mutations) to the most promising ones, and tests each new generation to see
what works best. Couple an evolutionary algorithm with an FPGA and amazing
things happen.
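As a minimal sketch of the idea (not Thompson's actual code, and with a toy
software fitness test standing in for measurements on a real chip), an
evolutionary algorithm over bit-string "chromosomes" can be written in a few
lines of Python:

    import random

    TARGET = "1010101010101010"  # toy goal: evolve a bitstring matching this pattern

    def fitness(chromosome):
        """Score a candidate by how many bits match the target."""
        return sum(a == b for a, b in zip(chromosome, TARGET))

    def mutate(chromosome, rate=0.05):
        """Flip each bit with a small probability."""
        return "".join(("1" if b == "0" else "0") if random.random() < rate else b
                       for b in chromosome)

    # Start with a random population of candidate designs.
    population = ["".join(random.choice("01") for _ in range(len(TARGET)))
                  for _ in range(20)]

    for generation in range(200):
        # Evaluate everyone and keep the better half (selection) ...
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]
        # ... then breed the next generation by mutating copies of the survivors.
        population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]
        if fitness(population[0]) == len(TARGET):
            break

    print("generation", generation, "best", population[0])

In Thompson's experiments the interesting part is the scoring step: instead of
comparing strings in software, each chromosome is loaded onto the FPGA and the
behavior of the physical chip is measured.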
You can run through thousands of generations quickly with this technology,
saving code that works well, rejecting ideas that don't contribute and
breeding in mutations to keep the mix dynamic. At Sussex, Adrian Thompson
evolved a circuit that could distinguish between two different audio tones.
It took more than 4,000 generations of algorithm evolution and roughly two
weeks of computer time and produced results that were, well, strange.
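Before getting to the strange part, it is worth pinning down what "works best"
means here. The article doesn't give Thompson's actual scoring rule; as a rough
sketch, a fitness function for the tone-discrimination task might reward
configurations whose output is high for one test tone and low for the other. In
the Python below, fake_measure_output() is an invented stand-in for loading a
configuration onto the chip, feeding it a tone and sampling the output:

    import random

    def fake_measure_output(configuration, tone_hz):
        """Stand-in for a real hardware measurement; returns a random voltage."""
        return random.random()

    def fitness(configuration, measure_output=fake_measure_output, trials=5):
        """Reward configurations whose output is high for one tone, low for the other."""
        score = 0.0
        for _ in range(trials):
            out_a = measure_output(configuration, tone_hz=1000)    # e.g. a 1 kHz tone
            out_b = measure_output(configuration, tone_hz=10000)   # e.g. a 10 kHz tone
            score += out_a - out_b   # bigger separation, better discriminator
        return score / trials

    print(fitness("0101" * 150))
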
Thompson's chip was doing its work preternaturally well. But how? Out of 100
logic cells he had assigned to the task, only a third seemed to be critical
to the circuit's work. In other words, the circuit was far more efficient than
a similar circuit designed by humans using known principles.
And get this: Evolution had left five logic cells unconnected to the rest of
the circuit, in a position where they should not have been able to influence
its workings. Yet if Thompson disconnected them, the circuit failed.
Evidently the chip had evolved a way to use the electromagnetic properties
of a signal in a nearby cell. But the fact is that Thompson doesn't know how
it works.
And that's the weird promise of using computers that evolve. These
algorithms take us into an era where accepted design rules break down, where
components get smaller and the properties of materials are only sketchily
understood. At this level, pushing into the realm of nanotechnology, it may
take evolutionary algorithms to discover workable designs, because we don't
know how to proceed ourselves.
Imagine the philosophical problem this creates. Suppose you build a critical
system for, say, a nuclear power plant. It works, and works well, but you can't
explain how. Can you deploy it? Can you rely on it?
If this sounds theoretical, consider that NASA's Langley Research Center has
just announced that it is buying a HAL hypercomputer from Star Bridge
Systems of Midvale, Utah. This computer is no larger than a regular desktop
machine, yet it's roughly 1,000 times faster than traditional commercial
systems because it uses Field Programmable Gate Arrays like those Thompson
used in his work. Surely the name HAL, of 2001 fame, is no coincidence.
HAL, after all, was the machine that could think almost as well as a person,
certainly well enough to threaten the entire crew he was in charge of. And
though a Star Bridge hypercomputer might not be conscious in any sense we
would recognize, it's able to use an operating system called Viva to
continually reconfigure itself, adapting specifically to deal with computing
situations it's handed.
We're just exploring the possibilities of evolutionary algorithms, but
already applications are apparent in areas such as image recognition, in
which a PC might continually refine its methods of identifying what it sees,
leading to machines that can recognize a human face. And evolvable hardware
means future computers might be able to upgrade their core circuitry simply
by downloading new code.
In Japan, Tetsuya Higuchi and his fellow computer scientists at the
Electrotechnical Laboratory are using genetic algorithms to build analog
circuit components that will go into new cellular telephones. Adaptive
hardware is also being studied at the Jet Propulsion Laboratory in Pasadena,
Calif., to create adaptive sensors for spacecraft. Evolvable computers
aren't yet front-page stuff, but I think they will take us in directions too
potent to ignore.
Paul A. Gilster can be reached at gilster@mindspring.com
--
Dan S
[ISML] Insane Science Mailing List
- To subscribe: http://www.onelist.com/subscribe.cgi/isml
Your use of Yahoo! Groups is subject to http://docs.yahoo.com/info/terms/


This archive was generated by hypermail 2.1.5 : Sat Nov 02 2002 - 08:06:55 MST