RE: Hofstadter Symposium

From: Robert Bradbury (bradbury@genebee.msu.su)
Date: Wed Apr 05 2000 - 02:04:31 MDT


On Tue, 4 Apr 2000, Billy Brown wrote:

> Someone (check the archives :-)) wrote:
> > John Koza said that in numerous attempts to have a genetic program
> > learn to model some tiny aspect of human intelligence or perception,
> > [snip] So, a "brain second" is 10^15
> > operations, and this huge number obviously poses a huge barrier to
> > machine intelligence.
>
> Well, not that big. There is an experimental 10^15 FLOPS system under
> construction now (for ~$100M, I think), so it should only take a decade or
> so for that kind of power to trickle down to the ~$1M systems that AI
> researchers could actually get time on.
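
(A back-of-the-envelope check in Python, for what it's worth; the
18-month price/performance halving time is an assumption, and the
dollar figures are just the ones quoted above:)

    # Years for a ~$100M 10^15 FLOPS system's price/performance to
    # trickle down to a ~$1M AI-research budget, assuming cost halves
    # every 18 months (an assumed Moore's-law rate, not a sourced one).
    import math

    cost_now = 100e6      # ~$100M system under construction
    cost_target = 1e6     # ~$1M research budget
    halving_years = 1.5   # assumed halving time

    halvings = math.log2(cost_now / cost_target)  # ~6.6 halvings
    print(halvings * halving_years)               # ~10 years

which agrees with the "decade or so" estimate above.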

At the Contact AI Symposium, Minsky pointed out the dozen or so
"schemas" the eye uses to determine the distance to objects. The brain
is very general-purpose hardware that over time accumulates
a variety of programs to accomplish specific tasks. When you use
general-purpose hardware, it is going to be expensive and slow.

The optical neuronal pathway is doing 1-100 gigaops (depending on who
you read). As the recent announcement (from France?) shows, once you
clearly understand what that hardware is doing, you can put most of
that processing capacity in a chip that was supposed to retail for $6.00.
The trick will be modeling the "hidden" algorithms the brain uses
well enough that they can be coded in very tight software and hardware.
For example, at some level of approximation the memory architecture
of computers today (L1, L2, L3 cache, swap space, hard disk, tapes)
has equivalent functionality to, and greater capacity than, human
short-term and long-term memory. Yet this only costs a few hundred
dollars. The cost of materials for a "brain" is only a few dollars;
it's the lengthy and laborious process of training the thing that
makes it expensive. Once we understand that training sufficiently and
can make copies of it, intelligence is going to be very cheap.
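
(For what it's worth, a rough sketch of that hierarchy; the capacity
and access-time figures below are ballpark guesses for circa-2000
hardware, not sourced numbers:)

    # Toy sketch of the storage-hierarchy analogy. All figures are
    # rough ballpark assumptions for ~2000 hardware, for illustration.
    hierarchy = [
        # (level,      capacity,   access time)
        ("L1 cache",   "32 KB",    "~1 ns"),
        ("L2 cache",   "256 KB",   "~10 ns"),
        ("L3 cache",   "2 MB",     "~30 ns"),
        ("RAM/swap",   "256 MB",   "~100 ns"),
        ("hard disk",  "20 GB",    "~10 ms"),
        ("tape",       "100+ GB",  "seconds to minutes"),
    ]
    for level, capacity, access in hierarchy:
        print(f"{level:10s} {capacity:10s} {access}")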

Examples I can think of are OCR and voice-recognition software.
Right now these are still done in software on general-purpose hardware,
but given the volumes, in a few years they will move into hardware and
be very fast and very cheap.

> You
> might be able to evolve small sub-components on more practical systems, but
> putting it all together is a big problem.

Yes, the interesting problem will not be the low-level networks that
"run" specific algorithms, but the higher-level entity that assigns
relative merit to, and chooses between, the results being presented.
Automobile crashes would be a good example showing that even humans
choose the wrong result from time to time.
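
(A toy sketch of that sort of arbiter; the distance "schemas" and
their confidence numbers are invented purely for illustration:)

    # Several low-level "schemas" each return a distance estimate
    # plus a confidence; a higher-level arbiter picks the result it
    # currently trusts most. A miscalibrated confidence yields a
    # wrong answer (the "crash" case). All names/numbers invented.
    def stereo_disparity(scene):  return (4.2, 0.9)  # (meters, confidence)
    def motion_parallax(scene):   return (4.5, 0.6)
    def texture_gradient(scene):  return (9.0, 0.2)

    def arbiter(scene, schemas):
        results = [schema(scene) for schema in schemas]
        return max(results, key=lambda r: r[1])  # highest confidence wins

    estimate, confidence = arbiter(None, [stereo_disparity,
                                          motion_parallax,
                                          texture_gradient])
    print(estimate, confidence)  # 4.2 0.9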

Robert


