From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Wed Jul 07 1999 - 20:58:00 MDT
Mike Linksvayer <ml@gondwanaland.com> said:
> On Wed, Jul 07, 1999 at 03:40:00PM -0700, Robert J. Bradbury wrote:
> > Ever consider that one reason for the delay in the introduction of
> > Merced might be the fact that they believe it may be "the last
> > processor you ever need to buy"?
>
> No. Merced will be lucky to be much faster than the fastest x86 cpu when
> it gets around to shipping. The danger to Intel is definitely not that
> Merced will be so fast nobody will ever want another cpu.
>
I suspect the marketing people may have used both of our
arguments as justifications/rationalizations for the delay.
The technical reasons for it are, I suspect, more complex.
I've seen some presentations on Merced's explicitly parallel
instruction-scheduling schemes and their compilers (and since I've
written compilers, I can to some degree judge the difficulty of what
they are trying to do...).
They are having a difficult time getting the compilers to work (and not
take a week to compile a program). Since what the compilers are able
to do feeds back into the hardware architecture, you can't completely
cast the hardware in stone until the software is done.
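To give a feel for the compiler problem, here is a toy sketch of my
own in C (the bundle notation below is made up for illustration, not
anything from Intel):

/* Toy sketch (my example, not Intel's): the compiler, not the
 * hardware, must find independent operations to pack into each
 * issue bundle. */
void saxpy(int n, float a, const float *x, float *y)
{
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];  /* load, load, multiply-add, store */
}

/* A hardware scheduler (the Alpha approach) discovers at run time
 * that iteration i+1's loads don't depend on iteration i's store,
 * and overlaps them.  An EPIC compiler has to prove that statically
 * (x and y don't alias), software-pipeline the loop, and emit
 * explicit bundles, roughly:
 *
 *   bundle 0:  ld x[i+2] || ld y[i+2] || fma t = a*x[i+1] + y[i+1]
 *   bundle 1:  st y[i]   || branch
 *
 * Doing that analysis correctly, and quickly, is why the compilers
 * are the long pole. */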
I don't want to get into a discussion of whether this is the right
approach (the Alpha for example does much of the work in hardware).
I think in order to provide backward compatibility (after all, emulating
the stupid x86 architecture is going to take up real-estate that you
can't devote to an Alpha-size cache), they had to make a decision to
move some of the execution-time work done by an architecture like the
Alpha back to compile time. Making that all come together isn't an easy job.
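A hedged illustration of that shift is if-conversion; the predicated
assembly below is schematic, my own rendering rather than real
compiler output:

/* Branchy source code: */
int abs_diff(int a, int b)
{
    if (a > b)
        return a - b;
    else
        return b - a;
}

/* Schematically, an EPIC compiler can emit predicated code instead:
 *
 *   cmp.gt  p1, p2 = a, b    ;; set both predicates at once
 *   (p1)    sub   r8 = a, b  ;; runs only when a >  b
 *   (p2)    sub   r8 = b, a  ;; runs only when a <= b
 *
 * Both subtracts issue in parallel and there is no branch for
 * run-time hardware to predict; the decision moved to compile
 * time. */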
I would argue that, so long as Merced is saddled with an x86 emulation
burden, it will not be a cost-effective competitor to chips without
that handicap. I feel, though, that an un-x86ed Merced (perhaps 2003-2004?)
will be the last processor you need for 90+% of the things we now
use computers for.
That era's chip *would* be the "last processor" until the IRAM
(Processor-in-Memory) chips come out to break the memory-to-processor
bottleneck. [After all, it is pretty pitiful when 70-80% of your
chip is devoted to nothing but cache!] The people at IBM seem to
have shown this is doable with existing fabs by relaxing some of
the DRAM constraints a bit. I suspect that with the new Hitachi-Cambridge
stacked 1-transistor/1-capacitor pseudo-static memory cell, PiM
chips done in IBM's SiGe process with copper wiring will really crank.
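Back-of-envelope, the bandwidth win looks something like this (every
number below is an assumption of mine, just to show the shape of the
argument):

#include <stdio.h>

int main(void)
{
    /* Assumed conventional system: 64-bit memory bus at 100 MHz. */
    double off_chip = 8.0 * 100e6;     /* bytes/sec across pins */
    /* Assumed PiM: a 4096-bit internal DRAM row at 100 MHz.     */
    double on_die   = 512.0 * 100e6;   /* bytes/sec on-die      */

    printf("off-chip: %5.1f GB/s\n", off_chip / 1e9);
    printf("on-die:   %5.1f GB/s\n", on_die / 1e9);
    printf("ratio:    %4.0fx\n", on_die / off_chip);
    return 0;
}

/* ~0.8 GB/s vs ~51 GB/s, a ~64x win before you even touch the fab --
 * which is exactly the gap all that cache is papering over. */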
If you combine those PiM chips with Toshiba's "paper-thin"
(130-micron) chip packaging, you can stack them really close (though
now you are back to the Cray problem of removing the heat). Though by
then we may have operational
reversible-instruction-set chip architectures that run cooler to
incorporate into the mainstream. The only thing we are missing is a
clear demonstration of optical inter-chip wiring to handle the cache
coherence for those applications (like databases) that require it, or
those computing models, like cellular automata, that need high
inter-CPU bandwidth.
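To see why cellular automata are so demanding, a toy per-CPU estimate
(all figures assumed):

#include <stdio.h>

int main(void)
{
    double edge  = 1024;   /* assumed cells per edge of a 3-D block */
    double bytes = 4;      /* assumed bytes of state per cell       */
    double steps = 1000;   /* assumed CA update steps per second    */
    double faces = 6;      /* a cubic block exchanges six faces     */

    /* Each step, every face of the block must be shipped to the
     * neighbouring CPU so it can update its border cells. */
    double per_sec = faces * edge * edge * bytes * steps;
    printf("halo traffic: %.1f GB/s per CPU\n", per_sec / 1e9);
    return 0;
}

/* ~25 GB/s per CPU, sustained, just for the borders -- far beyond
 * any late-90s electrical backplane, hence the optical wiring. */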
Yes, the future is clear and it seems like desktop brain-ops by 2010.
The problem will be programming the darn thing with something other
than bloatware spaghetti (which is how this whole thread started out).
Robert