Robert J. Bradbury writes:
> I feel though an un-x86ed Merced (perhaps 2003-2004?)
> will be the last processor you need for 90+% of the things we now
> use computers for.
Don't think so, boss. It has been predicted before, several times in fact, and it regularly doesn't happen. Why don't we still use PETs, Apple ]['s, or CP/M boxen? Ah, but you can't do multimedia with 6502s/Z80s. Similarly, you can't do realtime neural DSP with the Merced, no sir.
> That era chip *would* be the "last processor" until the IRAM
> (Processor-in-Memory) chips come out to break the memory-to-processor
> bottleneck. [After all it is pretty pitiful when 70-80% of your
Yes, watch out for the PlayStation 2: the first embedded-DRAM processor for the mainstream (and a whopping 4 MByte grain at that). Consumer warez setting the pace for technical excellence, how very wonderful.
> chip is devoted to nothing but cache!] The people at IBM seem to
In fact not only pitiful, but really *insane*. But for whatever strange reason, people still think Silicon Valley is populated by rational Spock-like types instead of mad hatter tinkerers. Low-fat computing isn't exactly popular -- all hail to investment protection.
> have shown this is doable with existing fabs by relaxing some of
> the DRAM constraints a bit. I suspect with the new Hitachi-Cambridge
> stacked 1 transistor/1 capacitor pseudo-static memory cell, that PiM
> chips done in IBM's SiGe process with copper wiring will really crank.
A wafer-integrated CAM implemented in that process, ah, that would be wonderful. Let your compiler crank out VHDL which gets translated into virtual circuitry. I doubt the emulation would be much slower than the real thing, and you could always reconfigger the thing dynamically whenever the needs change.
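To make "virtual circuitry" concrete, here's a minimal sketch (Python; the gate set and the half-adder netlist are invented for illustration) of circuitry-as-data: evaluating a netlist is interpretation, and "reconfiguring" is just handing the interpreter a different table.

  # Tiny gate-level "virtual circuit" interpreter.  The netlist is plain
  # data, so swapping in a different circuit at runtime is trivial --
  # the software analogue of reconfiguring a programmable fabric.
  GATES = {
      "AND": lambda a, b: a & b,
      "OR":  lambda a, b: a | b,
      "XOR": lambda a, b: a ^ b,
  }

  def evaluate(netlist, inputs):
      """netlist: list of (output_net, gate, input_net_a, input_net_b),
      topologically ordered.  inputs: dict of primary input values."""
      nets = dict(inputs)
      for out, gate, a, b in netlist:
          nets[out] = GATES[gate](nets[a], nets[b])
      return nets

  # A one-bit half adder as a netlist (hypothetical example circuit).
  half_adder = [
      ("sum",   "XOR", "a", "b"),
      ("carry", "AND", "a", "b"),
  ]

  nets = evaluate(half_adder, {"a": 1, "b": 1})
  print(nets["sum"], nets["carry"])   # -> 0 1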
> If you combine that with Toshiba's "paper-thin chip" (130-micron) packaging,
> you can stack these really close (though now you are back to the Cray
> problem of removing the heat). Though by then we may have operational
Some of the heat comes from driving I/O to the outside world. Dump motherboards, dump packaging & things suddenly start running faster & cooler.
> reversible-instruction-set chip architectures that run cooler to
> incorporate into the mainstream. The only thing we are missing is a
The reversible stuff is especially suitable for CAMs. Building a practical reversible CPU is extremely tedious, and we're many orders of magnitude away from the scale where the effects will start to show. At molecular scale, on the other hand...
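For the curious, a toy sketch (Python; the rule and lattice size are arbitrary choices, not anything canonical) of Fredkin's second-order trick, one reason reversibility comes almost for free in a CA: the next generation is the local rule applied to the current one, XORed with the previous one, so the very same rule runs the history exactly backwards.

  import numpy as np

  def step(curr, prev, rule):
      # Second-order reversible CA step: next = rule(curr) XOR prev.
      # XOR is its own inverse, so (curr, next) recovers prev exactly.
      return rule(curr) ^ prev

  def majority(c):
      # Arbitrary local rule: cell becomes the majority of itself and
      # its two neighbours on a periodic 1-D lattice.
      left, right = np.roll(c, 1), np.roll(c, -1)
      return ((left + c + right) >= 2).astype(c.dtype)

  rng = np.random.default_rng(0)
  prev = rng.integers(0, 2, 64)
  curr = rng.integers(0, 2, 64)
  s0, s1 = prev.copy(), curr.copy()

  for _ in range(100):                              # run forwards
      prev, curr = curr, step(curr, prev, majority)
  for _ in range(100):                              # run backwards
      prev, curr = step(prev, curr, majority), prev

  assert np.array_equal(prev, s0) and np.array_equal(curr, s1)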
> clear demonstration of optical inter-chip wiring to handle the cache
> coherence for those applications (like databases) that require it, or
May caches burn in hell along with their designers. They're unphysical.
> those computing models that require high inter-CPU bandwidth like
> cellular-automata.
But CAMs don't require high inter-CPU bandwidth; that's the chiefest beauty of them. It's the volume/surface scaling ratio: the work each node does grows with the volume of its lattice chunk, while the traffic it must exchange grows only with the chunk's surface. With current grain sizes you can comfortably run a CA code on a pile of PCs with a wimpy networking matrix.
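A back-of-the-envelope sketch of that volume/surface argument, assuming a block-decomposed 3-D lattice with nearest-neighbour halo exchange; every number in it (grain size, update rate, link speed) is a made-up illustration, not a measurement.

  # Rough compute-vs-communication tally for a block-decomposed 3-D CA.
  # All figures below are illustrative assumptions.
  n          = 512          # cells per edge of one node's cube
  cell_bytes = 4            # state per cell
  halo_faces = 6            # nearest-neighbour exchange, one face per direction

  cells_per_step = n ** 3                             # work ~ volume
  halo_bytes     = halo_faces * n ** 2 * cell_bytes   # traffic ~ surface

  cpu_rate  = 10e6          # cell updates/s on a cheap PC (assumed)
  link_rate = 1e6           # bytes/s of a wimpy network link (assumed)

  t_compute = cells_per_step / cpu_rate
  t_network = halo_bytes / link_rate
  print(f"compute {t_compute:.1f} s/step, network {t_network:.1f} s/step")

  # Doubling n multiplies the compute by 8 but the traffic only by 4,
  # so the bigger the per-node grain, the less the interconnect matters.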
> Yes, the future is clear and it seems like desktop brain-ops by 2010.
Aw, come on. That's ridiculous. Then why not a bandersnatch, delivered by 2010 sharp.
> The problem will be programming the darn thing with something other
> than bloatware spaghetti (which is how this whole thread started out).
Let's go, code gardeners. Grow your own app.