From: James Rogers (jamesr@best.com)
Date: Wed Dec 04 1996 - 10:47:06 MST
>>
>> On bad compiler days programming in assembler has its appeal. Start all
>> over from the CPU up.
>
>Current RISCs have become as finicky as Formula 1 cars. They either break
>the sound barrier or break down; nothing in between. Gcc is very
>good at producing terrible Alpha AXP code (while running like molasses
>during compilation), while handcrafted assembly runs like a fox (orders
>of magnitude faster) on the same problem/machine. Hang compilers.
With the Alpha AXP, this is primarily a result of mediocre CPU design rather
than something you can blame on the compilers. The Alpha has a very deep
pipeline, but almost no look-ahead or intelligent pre-fetch/decode logic.
The result is that the pipeline is flushed fairly often, which is only
worsened by its depth. Also, the high clock rates make pipeline stalls
(due to things like cache misses) more costly than they would be in
slower-clocked chips. Add on top of this an inefficient superscalar
implementation, and you have a chip that virtually *requires* handcrafted
assembly language to run efficiently.
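To put a rough number on it: a mispredicted branch costs about the pipeline
depth in cycles, so the deeper the pipe, the more each flush hurts. Here is
a toy C sketch (my own illustration, not Alpha-specific code) of the kind of
branch-free rewrite a good compiler should be doing for you but often isn't:

    /* Sum the positive elements of an array.  The branchy form takes a
       data-dependent branch every iteration; each misprediction flushes
       the pipeline, and a deep pipeline throws away that many cycles. */
    int sum_pos_branchy(const int *a, int n)
    {
        int i, s = 0;
        for (i = 0; i < n; i++)
            if (a[i] > 0)
                s += a[i];
        return s;
    }

    /* Branch-free rewrite: a mask built from the comparison replaces the
       branch, so there is nothing for the pipeline to mispredict. */
    int sum_pos_branchless(const int *a, int n)
    {
        int i, s = 0;
        for (i = 0; i < n; i++) {
            int m = -(a[i] > 0);     /* 0 or all-ones mask */
            s += a[i] & m;
        }
        return s;
    }
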
One of the reasons Intel's late-generation chips have done so well,
performance-wise, is that Intel probably has some of the best pre-execution
logic design and techniques of any CPU maker, RISC or otherwise. Add to this
that Intel easily has one of the most efficient superscalar implementations,
and you get real performance out of an old architecture. One of the
few RISC companies that I think has a really solid architecture concept is HP.
>I predict radical simplification of the CPUs, down to the scale of few 10
>kTransistors (only the core, not the RAM) due to their drastically better
>price/performance ratio (especially memory bandwidth) and the obvious need
>to go WSI (wafer-scale integration) quite soon.
>
>1) CPU dies must cost in the $1-50 range for maspar (massively
>   parallel) applications
>2) complex dies are dear, both because of design and yield issues
>3) the fraction of bad dies goes up exponentially with die size
>4) WSI needs a relatively high percentage (~50%) of viable dies
>5) WSI demolishes cutting, packaging, and testing in one fell swoop, and
>   offers small bus geometries and good memory bandwidth, since
>6) on-die accesses are orders of magnitude faster and burn much less
>   juice due to the absence of signal up/downscaling
>
>Corollaries:
>
>1) CPUs will become _very_ simple, see MISC. (I doubt InTeL will pioneer
>    this, though they sure will join the party once it has started. I
>    don't know what M$ might or might not do...)
Actually, you will probably see arrays of tiny cores on a chip, glued together
with complex decode logic, or, in the VLIW case, with the compiler doing most
of the decode work for you.
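The yield points above (2-4) are easy to put rough numbers on, and they are
exactly why tiny dies win. Here is a back-of-the-envelope sketch using the
simple Poisson yield model Y = exp(-A*D); the wafer cost, usable area, and
defect density below are made-up illustrative figures, not anyone's fab data:

    #include <stdio.h>
    #include <math.h>

    /* Toy cost-per-good-die calculation under a Poisson yield model.
       All numbers are illustrative assumptions, not real fab data. */
    int main(void)
    {
        double wafer_cost = 2000.0;    /* dollars per processed wafer  */
        double wafer_area = 20000.0;   /* usable mm^2 on the wafer     */
        double defect_den = 0.005;     /* defects per mm^2             */
        double die_mm2[4] = { 10.0, 50.0, 100.0, 300.0 };
        int i;

        for (i = 0; i < 4; i++) {
            double a     = die_mm2[i];
            double dies  = wafer_area / a;        /* ignores edge loss */
            double yield = exp(-a * defect_den);  /* fraction good     */
            double cost  = wafer_cost / (dies * yield);
            printf("%6.0f mm^2: yield %5.1f%%, cost per good die $%7.2f\n",
                   a, 100.0 * yield, cost);
        }
        return 0;
    }

With those assumptions a 10 mm^2 die lands around a dollar per good part,
while a 300 mm^2 die costs roughly two orders of magnitude more, which is
the whole maspar/WSI argument in one loop.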
>2) RAM grains will be small (128...512 kBytes) due to yield reasons, ergo
>3) OSes will feature handcrafted nanokernels (10-20 kBytes) and
>4) will need threaded code, requiring an on-die bi-stack architecture
>5) Buses will be wide (128-1024 bits), and we'll have VLIW (notice how
>   my old predictions correlate with what you can now read in
>   the newer Microprocessor Reports).
>6) The programming paradigm will be low-overhead asynchronous OOP, requiring
>7) a redundant hypergrid wiring scheme to catch the dead dies due to WSI
>   and to offer sufficient on-wafer communication bandwidth
>
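Re: corollaries 3 and 4, a threaded-code kernel really is tiny. Below is a
minimal C sketch of a direct-threaded inner loop with the two stacks a
MISC/Forth-style core wants in hardware; all the names and the word set are
mine, purely illustrative (the return stack is declared only to show the
shape, since this toy has no nested words):

    #include <stdio.h>

    /* A cell holds either a number, a pointer into threaded code, or a
       primitive's code address. */
    typedef union cell { long n; union cell *p; void (*fn)(void); } cell;

    static cell  ds[64]; static int dsp = 0;  /* data stack               */
    static cell *rs[64]; static int rsp = 0;  /* return stack (idle here) */
    static cell *ip;                          /* threaded-code pointer    */

    static void lit(void) { ds[dsp++] = *ip++; }               /* literal */
    static void add(void) { ds[dsp-2].n += ds[dsp-1].n; dsp--; }
    static void dot(void) { printf("%ld\n", ds[--dsp].n); }
    static void bye(void) { ip = 0; }

    int main(void)
    {
        cell prog[7];                     /* threaded code for: 2 3 + .   */
        prog[0].fn = lit; prog[1].n = 2;
        prog[2].fn = lit; prog[3].n = 3;
        prog[4].fn = add; prog[5].fn = dot; prog[6].fn = bye;

        for (ip = prog; ip != 0; ) {      /* NEXT: fetch, bump, execute   */
            void (*w)(void) = ip->fn;
            ip++;
            w();
        }
        return 0;
    }

The whole interpreter is the three-line NEXT loop; everything else is just
words, which is why a nanokernel in the 10-20 kByte range is plausible.
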
>> } easily abstracted. Other applications, such as database or systems
>> } development, require very thorough knowledge of data structures,
>> } algorithms,
>>
>> Hmm. Graphical representation of a database of records; highlight a
>> field to be the sort index, drag and drop "Quicksort"...
>>
>> But someone had to write "Quicksort".
>
>For a large class of problems, clean algorithmics do not exist (I'm amazed
>my MessagePad 130 (btw, have a look at the eMate 300 and the next Newt (2000))
>can read my scrawls most of the time, but then they use a large database
>of script samples for the noncontiguous script recognition).
>Such problems are best solved by WYWIWYG, GA-growing your code/data for
>fitness. Such problem-solving methodologies, whether digital (a very smart
>RAM) or analog (VLSI ANNs, the C. Mead approach), require a dedicated
>architecture, being unable to run on von Neumann (generically used)
>monoprocessors worth $0.02. We'll see hybrid architectures before very long.
>
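Since the "grow it for fitness" point keeps coming up, here is a minimal
sketch of the idea: a toy genetic algorithm in C that evolves 32-bit genomes
toward all bits set. Every name and parameter is my own arbitrary choice for
illustration, and it has nothing to do with how the Newton recognizer
actually works:

    #include <stdio.h>
    #include <stdlib.h>

    /* Toy GA: evolve 32-bit genomes toward maximum bit count. */
    #define POP  32
    #define GENS 200

    static unsigned rnd32(void)               /* widen rand() to 32 bits  */
    {
        return ((unsigned)rand() << 24) ^ ((unsigned)rand() << 12)
               ^ (unsigned)rand();
    }

    static int fitness(unsigned g)            /* count the set bits       */
    {
        int f = 0;
        while (g) { f += (int)(g & 1u); g >>= 1; }
        return f;
    }

    static unsigned pick(unsigned *pop)       /* 2-way tournament select  */
    {
        unsigned a = pop[rand() % POP], b = pop[rand() % POP];
        return fitness(a) > fitness(b) ? a : b;
    }

    int main(void)
    {
        unsigned pop[POP], next[POP];
        int g, i, best = 0;

        for (i = 0; i < POP; i++)
            pop[i] = rnd32();

        for (g = 0; g < GENS; g++) {
            for (i = 0; i < POP; i++) {
                unsigned mask  = rnd32();             /* uniform crossover */
                unsigned child = (pick(pop) & mask) | (pick(pop) & ~mask);
                child ^= 1u << (rand() % 32);         /* point mutation    */
                next[i] = child;
            }
            for (i = 0; i < POP; i++)
                pop[i] = next[i];
        }

        for (i = 0; i < POP; i++)
            if (fitness(pop[i]) > best)
                best = fitness(pop[i]);
        printf("best fitness after %d generations: %d / 32\n", GENS, best);
        return 0;
    }

Selection keeps the fitter of two random parents, uniform crossover mixes
them, and a single-bit mutation keeps the population moving; swap in a
fitness function over code or weights and the skeleton stays the same.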
>> } >Everybody (almost) agrees that the GUI is a better way to communicate
>> } >with a computer than a command language. Everybody except system
>> } >developers. Eh...
>>
>> Around here, programmers in general, and many people who don't program
>> regularly as well, prefer command lines to GUIs, or at least want the
>
>Even a CLI is a GUI. You don't see signal levels (ok, in EM you do, but
>that's just another hardware GUI), do you? ;) A GUI you can't customize
>is useless.
>
I'll take it one step further: An OS you can't customize is useless.
-James Rogers
jamesr@best.com