Re: How to help create a singularity.

From: Eugene.Leitl@lrz.uni-muenchen.de
Date: Tue May 01 2001 - 13:15:36 MDT


Aleks Jakulin wrote:
 
> I agree. But progress is usually made with baby steps, not jumps. When the needs
> are demonstrated, the solutions will follow. Hardware is very conservative ---

Hi Aleks. Trouble is, if you ask the average software developer whether
he'll need computers three orders of magnitude faster, he'll look at
you askance. For many of them computers are plenty fast already.

> many have been burned by providing something that was eventually not supported.
> Practically nobody will throw a technology on the market without knowing how it
> will be used. Even fascinating technologies such as FPGA are marginalized,

Yes, the debacles are many, and well documented. Shrinking profit margins
make for highly risk-averse R&D.

> because they're too radical. And in any case, before you implement something in
> hardware, it has to be simulated and demonstrated in software.

The problem is the high prototyping cost of modern hardware, in
particular the pipeline setup time when running a new design through
the fab. Modern foundries can't afford to goof around with outlandish
architectures. They're trying to keep their production capacity
saturated with cash chips, and to gather whatever profits are left
for the war chest, i.e. legal battles, economic manoeuvring, and
putting the rest into surefire returns, such as developing the next
line of products, structure shrinks, and finishing touches on core
logic. Nevertheless, judging from the job alerts of a single foundry
(I track Infineon), they're gearing up to do a lot with embedded RAM
designs, and preparing for nonvolatile memories.

More minimal cores like StrongARM dominate the deep embedded market,
and they seem to be considered good candidates for very large clusters,
due to their better OPS/Watt, as well as for the handheld (->wearable)
market. Google has just doubled its capacity, to 8 k nodes. At these
cluster sizes footprint, energy consumption and air conditioning costs
become the biggest numbers on the budget, apart from people, of course.
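
To put rough numbers on that, a back-of-envelope sketch in Python
(every figure here is an assumption for illustration, not anybody's
actual data):

    # Back-of-envelope cluster electricity budget. All figures are
    # illustrative assumptions, not measured numbers.
    nodes = 8000          # cluster size
    watts_per_node = 100  # assumed draw per commodity node, in W
    overhead = 1.5        # assumed factor for cooling + distribution
    usd_per_kwh = 0.08    # assumed electricity price

    kw = nodes * watts_per_node * overhead / 1000.0
    usd_per_year = kw * 24 * 365 * usd_per_kwh
    print("%.0f kW sustained, ~%.0f USD/year" % (kw, usd_per_year))

That already lands in the megawatt ballpark before you've paid a
single salary.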

Message passing hardware support is moving into the next generation
of consumer processors (AMD's HyperTransport), so things do move.

> An interesting challenge is how to work well with architectures such as CA's.
> Right now it seems that they're used either as a machine code equivalent for
> logic expressed as "traditional" serial code, or for evolved computational
> circuits. It seems to me that a new generation of significantly more

There are two principally different ways of implementing CAs in hardware:
as embedded RAM, or by hardwiring the individual cells. Embedded RAM
implementations optimize for cell density, and allow you to use higher
dimensions, e.g. 3d lattices with their shorter average distances and
more room to route the signalling pipework. But they're relatively slow,
because the cells are updated more or less sequentially. Hardwiring the
individual cells gives you blazing speed, but limits you to the plane.
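
A minimal sketch of the embedded-RAM flavour (the rule here, a
two-state majority vote, is an arbitrary placeholder):

    # Embedded-RAM-style CA: the lattice sits in ordinary memory and
    # is scanned sequentially, which is exactly why this variant is
    # slow. A hardwired CA would update every cell in the same tick.
    W, H = 8, 8
    grid = [[(x ^ y) & 1 for x in range(W)] for y in range(H)]

    def step(grid):
        nxt = [[0] * W for _ in range(H)]
        for y in range(H):              # sequential scan, one cell
            for x in range(W):          # at a time
                s = (grid[y][x]
                     + grid[(y - 1) % H][x] + grid[(y + 1) % H][x]
                     + grid[y][(x - 1) % W] + grid[y][(x + 1) % W])
                nxt[y][x] = 1 if s >= 3 else 0   # majority of 5
        return nxt

    grid = step(grid)

Going to a 3d lattice is just one more index and a third pair of
neighbours; in the hardwired case you'd have to route those wires on
a plane.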

Hardware CAs are sufficiently lunatic fringe that I would attempt to make
them accept silicon compiler output with as little tweaking as possible,
i.e. map registers, wires, and blocks of logic onto the CA grid, with the
benefit of deterministic, quantized signal behaviour.
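
Wireworld (Silverman's rule) is an existing CA in exactly that spirit,
though of course not what a silicon compiler would emit; a compressed
sketch:

    # Wireworld: wires and gates are drawn as cell states, and signals
    # ("electrons") advance one cell per tick -- deterministic,
    # quantized signal behaviour on a CA grid.
    EMPTY, HEAD, TAIL, WIRE = 0, 1, 2, 3

    def step(grid):
        H, W = len(grid), len(grid[0])
        nxt = [row[:] for row in grid]
        for y in range(H):
            for x in range(W):
                c = grid[y][x]
                if c == HEAD:
                    nxt[y][x] = TAIL
                elif c == TAIL:
                    nxt[y][x] = WIRE
                elif c == WIRE:
                    heads = sum(grid[(y + dy) % H][(x + dx) % W] == HEAD
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                                if (dy, dx) != (0, 0))
                    if heads in (1, 2):      # 1 or 2 head neighbours
                        nxt[y][x] = HEAD
        return nxt

    # A straight wire carrying one electron, moving right each tick:
    grid = [[EMPTY] * 8 for _ in range(3)]
    grid[1] = [TAIL, HEAD] + [WIRE] * 6
    grid = step(grid)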

Of course this is grave substrate underutilization, but it gets your foot
in the door. And once foundries are turning out these wafers, you can
introduce variations (use more or less state per cell, change the rule,
change the neighbourhood) at much lower cost, and tailor the substrate
to other paradigms entirely, without visibly changing the hardware.

> sophisticated tools are required to take advantages of notions like heuristics
> or modules.
[...]
> I generally agree. The main dilemma is how many heuristics should people
> provide -- on one hand there might be genetic algorithms, on the other
> hand-coded "intelligence". My position is somewhere in between, as I believe
> yours too.

I'm actually pretty much at the extreme end, pro using as much GA
messiness as is humanly possible. The modularization (which is clearly
present in biological systems) should emerge naturally, not be the
handiwork of human meddling. It might be fun to submit human designs
into the genetic soup, though, and see what evolves.
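
For concreteness, a toy of that position (the fitness function, the
genome encoding and the seeded "human design" are all placeholders; in
the scenario above the genome would encode a CA rule or a circuit):

    # Messy-GA toy: bitstring genomes, mutation + truncation selection,
    # with one hand-crafted genome dropped into the random initial soup.
    import random

    GENOME_LEN, POP, GENERATIONS = 64, 50, 200

    def fitness(g):
        return sum(g)                 # placeholder: count the 1-bits

    def mutate(g, rate=0.02):
        return [b ^ (random.random() < rate) for b in g]

    human_design = [1, 0] * (GENOME_LEN // 2)   # the seeded design
    soup = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
            for _ in range(POP - 1)] + [human_design]

    for _ in range(GENERATIONS):
        soup.sort(key=fitness, reverse=True)
        parents = soup[:POP // 2]               # truncation selection
        soup = parents + [mutate(random.choice(parents))
                          for _ in range(POP - len(parents))]

    print(fitness(max(soup, key=fitness)))

Whether modularization emerges is then up to the soup, not the
designer.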

> Minsky had a nice way of saying it, something along these lines: if natural
> evolution of intelligence took 200 million years, common sense and reason might
> take far less. The quest is for the most efficient approach (thus realistically
> feasible), not the simplest theoretically feasible.

I'm all for the practical approach, but the product of the skills, time
and hardware at my disposal doesn't allow me to do anything funky.
Perhaps I should wait for those desktop nanolithoprinters I mentioned.
 
> > I hope you like it in there, where you're sitting. I've been there myself,
> > briefly, but thankfully, have gotten better since.
>
> I do not quite understand what you mean here. Could you explain?

Apologies for the flippant comment. I thought you were a strong AI adherent
like Eliezer. I was referring to the time when I read Elaine Rich,
Hofstadter and Minsky, and all kinds of weird AI journals, and believed
their approach was not sterile. That was about 15 years ago. Then I caught
a bad case of cellular automata and complexity, resulting in the processing
of lots of dead tree labeled Fredkin, Toffoli, Holland, Koza, Kauffman,
Wolfram, and their illustrious ilk. I still haven't recovered. I'm looking
for other infectious material, but so far haven't found much. Maybe I've
become immune; that would be a pity.
 
> > We don't need no compilation//we don't need no flow control.
>
> Even we monkeys have multiple layers of behavior control, some are reflexive and
> fast, others are reflective and flexible.

Absolutely. But we don't have kLoCs inside us, nor sequential threads of
control (funny that consciousness should seem so absurdly sequential to
introspection), and we generally do not suffer from privilege violations.
I think computers should be more like monkeys, not the other way round.


