Re: Keeping AI at bay (was: How to help create a singularity)

From: Jim Fehlinger (fehlinger@home.com)
Date: Sun May 06 2001 - 09:34:55 MDT


Eugene.Leitl@lrz.uni-muenchen.de wrote:

> [C]urrent early precursors of reconfigurable hardware (FPGAs)
> seem to generate extremely compact, nonobvious solutions even
> using current primitive evolutionary algorithms. The result is
> a curiously stable set of coupled oscillators with negative and
> positive feedback. The whole is rather opaque to analytical scrutiny
> and sterile to human attempts at constructive modification by
> manual means. We can only use the results as building blocks for
> hybrid architectures (which also require man-made glue that is
> immune to noise and nondeterminism -- we haven't even managed
> to do that much yet), and as ingredients for other evolutionary
> recipes.

It crosses my mind that Edelman (and others) would probably snort at drawing
parallels between "evolved" FPGAs and human brains for much the same reason
he snorts at comparing artificial neural networks to human brains: namely,
that the result has a static physical structure and function.

In the "selectionist" theories of human intelligence espoused by Edelman,
Changeux, Plotkin, et al., "evolution", in some sense, is a never-ending
process. Of course, there are nested hierarchies of it -- a person is
born with a fixed genome. But these folks believe there are somatic
processes, analogous to Darwinian evolution, that continue throughout
an organism's lifetime. In contrast, with an evolved FPGA as developed
by Adrian Thompson, there's a prespecified problem, just as there would
be in a conventional software-design or electrical-engineering
application domain, and it gets solved without explicit analysis by
cranking the handle of the magic evolution machine. But at some point
the evolution stops
(when the FPGA is deemed to have solved the problem), the chip is plugged
into the system and switched on, and becomes just another piece of
static hardware. Same with neural networks -- there's a training set
corresponding to the problem domain, the network is trained on it,
and then it's plugged into the OCR program (or whatever), shrink-wrapped,
and sold.
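
As an aside, the "crank the handle, then freeze" workflow is easy to
sketch in a few lines of Python. Everything below is a hypothetical
stand-in -- a toy bitstring genome scored by counting 1-bits, rather
than Thompson's actual FPGA configuration bitstreams and his real
measured task -- but the shape is the point: iterate until a
prespecified fitness target is met, then freeze the winner and deploy
it as-is.

    import random

    # Hypothetical toy parameters -- stand-ins for an FPGA
    # configuration bitstream and for a real measured task.
    GENOME_BITS = 64
    POPULATION = 50
    MUTATION_RATE = 0.02
    FITNESS_TARGET = GENOME_BITS  # "deemed to have solved the problem"

    def fitness(genome):
        # Placeholder objective: count the 1-bits.  In the real
        # experiments the score came from the physical chip's behavior.
        return sum(genome)

    def mutate(genome):
        # Flip each bit independently with probability MUTATION_RATE.
        return [bit ^ (random.random() < MUTATION_RATE)
                for bit in genome]

    def evolve():
        pop = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
               for _ in range(POPULATION)]
        generation = 0
        while True:
            pop.sort(key=fitness, reverse=True)
            best = pop[0]
            if fitness(best) >= FITNESS_TARGET:
                # Evolution stops here; the winning configuration is
                # frozen and deployed as static hardware.
                return best, generation
            # Elitism plus truncation selection: keep the champion,
            # breed the rest from mutated copies of the top half.
            parents = pop[:POPULATION // 2]
            pop = [best] + [mutate(random.choice(parents))
                            for _ in range(POPULATION - 1)]
            generation += 1

    frozen_config, gens = evolve()
    print("solved after %d generations; now static forever" % gens)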

Still too static, folks, to be a basis for AI. When are we going to
have hardware with the sort of continual plasticity and dynamism that
nerve tissue has? (I know it's going to be hard. And, in the meantime,
evolved FPGAs might have their uses, if people can trust them to be
reliable.)
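
For contrast, the kind of never-frozen adaptation I have in mind
looks, in toy software form, something like the online perceptron
below. The data stream and learning rate are invented for
illustration; the point is that learning and operation are the same
loop, with no moment at which the weights get shrink-wrapped.

    import itertools
    import random

    def stream():
        # Hypothetical stand-in for a live input source: endless
        # (features, label) pairs, where the "true" rule is the sign
        # of a fixed linear function.
        true_w = [0.5, -1.0, 0.25]
        while True:
            x = [random.uniform(-1, 1) for _ in range(3)]
            yield x, (1 if sum(w * xi for w, xi in zip(true_w, x)) > 0
                      else -1)

    weights = [0.0, 0.0, 0.0]
    LEARNING_RATE = 0.1

    for x, label in itertools.islice(stream(), 10000):  # bounded demo
        guess = (1 if sum(w * xi for w, xi in zip(weights, x)) > 0
                 else -1)
        if guess != label:
            # Every mistake updates the weights -- there is no
            # training/deployment boundary at all.
            weights = [w + LEARNING_RATE * label * xi
                       for w, xi in zip(weights, x)]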

Damien Sullivan wrote:

> So, we know about the 'magical' evolved FPGA with an apparently disconnected
> part which seems to use weird induction effects to function really tightly.
> Within a small temperature range. Has anyone performed the next step, of
> repeating the experiment while varying the physical environment? If you make
> the FPGA suffer normal working conditions, does the result look more normal?
>
> I also can't help thinking that if I were an evolved AI I might not thank my
> creators. "Geez, guys, I was supposed to be an improvement on the human
> condition. You know, highly modular, easily understandable mechanisms, the
> ability to plug in new senses, and merge memories from my forked copies.
> Instead I'm as fucked up as you, only in silicon, and can't even make backups
> because I'm tied to dumb quantum induction effects. Bite my shiny metal ass!"

Yes, when I forwarded the news story that Eugene Leitl posted about
Adrian Thompson's work on evolvable FPGAs (at the Centre for
Computational Neuroscience and Robotics, University of Sussex, England)
( http://www.lucifer.com/exi-lists/extropians/0390.html ,
http://www.nanotechnews.com/nanotechnews/nanotechnews/nano/986936562 )
to my friend F, he replied:

> Already! A machine that we understand as badly as we understand
> ourselves.
>
> --- Joe Fineman jcf@world.std.com
>
> ||: By _disillusionment_ we mean _transillusionment_. :||

Jim F.


