From: Eugene Leitl (Eugene.Leitl@lrz.uni-muenchen.de)
Date: Tue Dec 03 1996 - 12:40:19 MST
On Tue, 3 Dec 1996, Twirlip of Greymist wrote:
> On Dec 2, 6:33pm, James Rogers wrote:
>
> [...]
> } >Yes but it doesn't have to look like gibberish.
>
> To whom? There are people who can read and possibly write in straight
> hex machine code.
Chuck Moore (of Forth fame) still programs this way. At iTV they had
to disassemble a large application (a circuit simulator) he wrote entirely
using his crazy octakey (not hexapod) input scheme. This very nontrivial
application (he uses it to develop his i21/F21 chips, since the
commercial simulators produce only digital barf if let loose upon his
designs) never had any source. (Why do you think his old company went
by the name "Computer Cowboys"? Yeeeehaw!)
> } usefulness of languages such as C and C++ apparent. Those languages make
> } more sense than the really high-level ones if you understand what is
> } actually going on.
>
> On bad compiler days programming in assembler has its appeal. Start all
> over from the CPU up.
Current RISCs have become as finicky as Formula 1 cars: they either
break the sound barrier, or they break down, with nothing in between. Gcc
is very good at producing terrible Alpha AXP code (while running like
molasses during compilation), while handcrafted assembly runs like a fox
(orders upon orders of magnitude faster) on the same problem/machine.
Hang compilers.
I predict a radical simplification of CPUs, down to the scale of a few
tens of kTransistors (the core only, not the RAM), due to their drastically
better price/performance ratio (especially memory bandwidth) and the
obvious need to go WSI (wafer-scale integration) quite soon.
1) CPU dies must cost in the 1-50 $ range for maspar applications
2) complex dies are dear, both because of design and yield issues
3) the fraction of bad dies goes up exponentially with die size
4) WSI needs a relatively high percentage (50%) of viable dies
5) WSI demolishes cutting, packaging, testing in one fell swoop, and
offers small bus geometries and good memory bandwidth since
6) on-die accesses are orders of magnitude faster and burn much less
juice due to absence of signal up/downscaling
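Point 3 is just the standard Poisson yield model: the probability that a
die has no fatal defect falls off exponentially with its area. A quick
sketch, with a purely hypothetical defect density (the function name and
the numbers are mine, not from any fab data):

```python
import math

def poisson_yield(die_area_cm2, defect_density_per_cm2):
    """Poisson yield model: fraction of good dies = exp(-area * defect density)."""
    return math.exp(-die_area_cm2 * defect_density_per_cm2)

D = 1.0  # defects per cm^2, a made-up but plausible figure

# An 8x reduction in die area turns a hopeless yield into a WSI-friendly one:
print(f"2.00 cm^2 die yield: {poisson_yield(2.00, D):.1%}")  # ~13.5%
print(f"0.25 cm^2 die yield: {poisson_yield(0.25, D):.1%}")  # ~77.9%
```

Which is exactly why small MISC-style cores clear the 50% viable-die bar
for WSI while big complex dies do not.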
Corollaries:
1) CPUs will become _very_ simple, see MISC. (I doubt InTeL will pioneer
this, though they sure will join the party once it has started. I
don't know what M$ might or might not do...)
2) RAM grains will be small (128...512 kBytes) for yield reasons,
ergo
3) OSes will feature handcrafted nanokernels (10-20 kByte) and
4) need threaded code, requiring an on-die bi-stack architecture
5) Buses will be large (128-1024 bits), and we'll have VLIW (notice how
my old predictions correlate with what you can now read in
the newer Microprocessor Reports).
6) Programming paradigm will be low-overhead asynchronous OOP, requiring
7) a redundant hypergrid wiring scheme to route around the dead dies
   WSI leaves in place and to offer sufficient on-wafer communication
   bandwidth
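For the non-Forthers: "threaded code" here means a program that is just a
list of addresses of primitives (plus inline literals), walked by a tiny
inner interpreter, with a data stack and a return stack (hence "bi-stack").
A minimal sketch in Python; every name in it is illustrative, not taken
from any real Forth system:

```python
data_stack = []    # parameter stack
return_stack = []  # return stack (the second stack of the bi-stack core)

def run(thread):
    """Inner interpreter: fetch the next cell and execute it."""
    ip = 0
    while ip < len(thread):
        op = thread[ip]
        ip = op(ip + 1, thread)

def lit(ip, thread):                 # push the inline literal that follows
    data_stack.append(thread[ip])
    return ip + 1

def dup(ip, thread):
    data_stack.append(data_stack[-1])
    return ip

def mul(ip, thread):
    b, a = data_stack.pop(), data_stack.pop()
    data_stack.append(a * b)
    return ip

def add(ip, thread):
    b, a = data_stack.pop(), data_stack.pop()
    data_stack.append(a + b)
    return ip

def call(subthread):                 # a colon definition: nest via return stack
    def enter(ip, thread):
        return_stack.append(ip)
        run(subthread)
        return return_stack.pop()
    return enter

square = [dup, mul]
program = [lit, 6, call(square), lit, 1, add]   # 6 squared, plus 1
run(program)
print(data_stack)   # [37]
```

On a bi-stack chip such as the F21 both stacks live on-die and the inner
interpreter collapses into the instruction fetch itself; the Python
version only shows the control structure.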
> } easily abstracted. Other applications, such as database or systems
> } development, require very thorough knowledge of data structures, algorithms,
>
> Hmm. Graphical representation of a database of records; highlight a
> field to be the sort index, drag and drop "Quicksort"...
>
> But someone had to write "Quicksort".
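Indeed, and what that drag-and-drop "Quicksort" icon would wrap is only a
few lines anyway. A naive sketch (list-copying, not the in-place version
you'd actually ship):

```python
def quicksort(xs):
    """Naive quicksort: first element as pivot, partition into copies."""
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    return (quicksort([x for x in rest if x < pivot])
            + [pivot]
            + quicksort([x for x in rest if x >= pivot]))

print(quicksort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```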
For a large class of problems, clean algorithmics simply do not exist.
(I'm amazed my MessagePad 130 can read my scrawls most of the time; btw,
have a look at the eMate 300 and the next Newton (2000). But then it uses
a large database of script samples for its noncontiguous script
recognition.)
Such problems are best solved by WYWIWYG: GA-growing your code/data for
fitness. Such problem-solving methodologies, whether digital (a very smart
RAM) or analog (VLSI ANNs, the Carver Mead approach), require a dedicated
architecture, being unable to run on generic von Neumann monoprocessors
worth $0.02. We'll see hybrid architectures before very long.
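"GA-growing" in its simplest digital form looks like this toy sketch:
breed a population of bit-strings toward a fitness target by truncation
selection plus mutation. Every parameter here (16-bit target, population
30, 5% mutation) is invented for illustration:

```python
import random
random.seed(1)  # deterministic toy run

TARGET = [1] * 16                        # the "fitness landscape": all-ones

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=30, generations=200):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break                        # grown a perfect individual
        survivors = pop[: pop_size // 2] # truncation selection (elitist)
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return pop[0]

best = evolve()
print(fitness(best), "of", len(TARGET))
```

Nothing beyond the fitness function was "programmed"; the code/data was
grown, which is the whole point when no clean algorithm exists.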
> } >Everybody (almost) agrees that the GUI is a better way to comunicate with a
> } >computer than a command language. Everybody except system developers. Eh...
>
> Around here, programmers in general, and many people who don't program
> regularly as well, prefer command lines to GUIs, or at least want the
Even a CLI is a GUI. You don't see signal levels (ok, in EM you do, but
that's just another hardware GUI), do you? ;) A GUI you can't customize
is useless.
> option available.
That's the reason why Apple boxes suck (unless they run MkLinux, that
is). (No, the Newt does not do Linux. ARM is great for Forth, though, and
I wonder whether They (apage, vade retro &c) have messed up the StrongARM?)
> Merry part,
> -xx- Damien R. Sullivan X-) <*> http://www.ugcs.caltech.edu/~phoenix
P.S. Sorry if I appear somewhat incoherent: my only net access is very
expensive, and so I miss a lot of important mail.
> Jesus don't walk on water anymore; his feet leak.
> -- Edward Abbey
>
This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 14:35:52 MST