From: Emlyn (onetel) (emlyn@one.net.au)
Date: Sun Jul 16 2000 - 04:03:32 MDT
----- Original Message -----
From: Robert J. Bradbury <bradbury@aeiveos.com>
To: <extropians@extropy.com>
Sent: Sunday, July 16, 2000 12:29 AM
Subject: Re: Corporate Uploads Take Over
>
>
> On Sat, 15 Jul 2000, Emlyn (onetel) wrote:
>
> > > I wrote:
> > > >To get the hardware "cheap enough", you need a driver for the research
> > > >required to quickly advance the curve on what is likely to be highly
> > > >special purpose hardware.
>
> > I'm not sure that I agree that the hardware will need to be all that
> > special purpose. You're basically looking at massively parallel hardware,
> > which has got to be where hardware is heading in any case.
>
> Not completely true. Programming massive parallelism is not a mainstream
> application. Human minds can't think that way so writing the software
> is very difficult. I would argue massive parallelism will remain a
> special purpose application where it is almost always designed as
> a model or simulation of some real world process (weather, buckytube
> bending, protein folding, etc.). Either that or it will be parallelism
> for doing identical stuff in multi-threads (e.g. web serving).
The kinds of parallel applications that require true parallel architectures,
rather than distributed computing (which is where web servers, DBMSs and the
like tend to fall), will become more mainstream, I think. Complex simulations
of real-world behaviour, for example, are all going to need a parallel
architecture. Well, that's not entirely true; you can simulate parallelism on
a sequential machine, but such applications will always run faster on an
appropriately designed parallel machine, and speed is always the issue.
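To make that concrete, here's a rough sketch of the kind of simulation I mean
(Python, names and numbers purely illustrative): a toy grid "diffusion" step
in which every cell updates from its neighbours. On a sequential machine you
sweep over the cells one by one; on a suitably designed parallel machine each
cell, or block of cells, can update at the same time, which is why these sims
map so naturally onto parallel hardware.

    # Illustrative only: one timestep of a toy grid-diffusion simulation.
    # Each cell's new value depends only on its neighbours' old values,
    # so in principle every cell can be updated simultaneously.
    import numpy as np

    def diffusion_step(grid, alpha=0.1):
        # Neighbour sums via array shifts; a parallel machine could give
        # each cell (or block of cells) its own processing element.
        up    = np.roll(grid, -1, axis=0)
        down  = np.roll(grid,  1, axis=0)
        left  = np.roll(grid, -1, axis=1)
        right = np.roll(grid,  1, axis=1)
        return grid + alpha * (up + down + left + right - 4 * grid)

    grid = np.random.rand(512, 512)
    for _ in range(100):
        grid = diffusion_step(grid)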
>
> While general purpose hardware can simulate neural nets, to get to the
> real time or faster time scales that Robin requires, it has to be
> special purpose (at least currently). Ultimately it isn't computation
> the brain is good at, it is evolving clever interconnects. Almost
> *all* current hardware devices (transistors, FETs, etc) and communications
> protocols are not optimal for brain architectures. You need to change
> 1-to-1, 1 to few or 1-to-many strategies into many-to-many. As
> interconnected as the entire WWW is today it is still much less
> than a single human brain.
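To put some very rough numbers on the interconnect point: the usual round
figures are ~10^11 neurons with ~10^3 or more synapses each. Even a naive
per-timestep update is then dominated by shuffling synapse weights and spikes
around, not by the arithmetic done at each neuron. A back-of-the-envelope
sketch (Python, figures purely illustrative):

    # Toy estimate (illustrative figures, not a real brain model): the
    # per-second memory/communication traffic of a naive neural simulation.
    neurons           = 1e11    # rough human-brain neuron count
    synapses          = 1e3     # synapses per neuron, low-end estimate
    timestep          = 1e-3    # 1 ms simulation resolution
    bytes_per_synapse = 8       # weight plus target index, say

    traffic_per_step = neurons * synapses * bytes_per_synapse  # per update
    traffic_per_sec  = traffic_per_step / timestep
    print("%.1e bytes/s of synapse traffic" % traffic_per_sec)  # ~8e17

That kind of many-to-many traffic is exactly what conventional buses and
point-to-point links aren't designed for.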
>
> > Is it more likely that we'll get far faster massively parallel systems
> > (at cheapish prices) first, or that we'll be able to conquer the problems
> > of high bandwidth connections between human & machine, an area of
> > technology that could barely be said to have hatched?
>
> We will have special purpose massively parallel systems (I would love
> to know what IBM thinks the market for Blue Gene is). I suspect they
> will not be particularly cheap (they have to sell 200 @ $1 million each
> to recover their investment at a decent profit). We will start with
> low bandwidth human-machine communications (think of the retraining you
> do to scribble on a palm-pilot) that will jump significantly when it
> becomes feasible to implant devices into which neurons can grow
> that are linked to exo-GHz transceivers (think cell-phone or
> Borgian exohardware). With human "eye" "input" and an equivalent
> optic-nerve sized "output", you have the foundation for high bandwidth
> links.
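A back-of-the-envelope comparison of those two channels (figures rough and
purely illustrative):

    # Rough, illustrative comparison: typing/scribbling bandwidth vs. an
    # optic-nerve-sized neural link.
    keyboard_bps   = 5 * 8     # ~5 characters/sec at 8 bits each, ~40 bit/s
    optic_fibers   = 1e6       # roughly a million fibres in the optic nerve
    bits_per_fiber = 10        # order-of-magnitude per-fibre rate estimate
    optic_bps      = optic_fibers * bits_per_fiber   # ~1e7 bit/s

    print("neural link ~%.0fx the keyboard" % (optic_bps / keyboard_bps))

Even with crude assumptions the gap is around five orders of magnitude, which
is what makes the implanted-link path so interesting.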
>
> So I see them both developing on separate paths, but the market size
> for the massively parallel hardware is likely to be much smaller than
> the market for high bandwidth exolinks. I can't win at Tetris anymore
> because the keyboard will not respond as fast as I can think. Think
> about the protein folding market vs. the game playing market.
>
> Robert
>
>