From: Dan Clemmensen (Dan@Clemmensen.ShireNet.com)
Date: Sun May 17 1998 - 20:08:06 MDT
GBurch1 wrote:
>
> In a message dated 98-05-17 15:47:32 EDT, Dan Clemmensen wrote:
>
> > We appear to disagree on the boundary between "simple" and "complex".
> > I place a desktop-sized supercomputer factory in the "simple" category.
>
>[...] I place it in the
> "simple" category only IF the designs are available. Perhaps "simple" is
> exactly the right word, but "widespread" is not. It depends on intellectual
> property law and custom. If Intel and the other current manufacturers can
> keep their designs protected, then it might be some while before it becomes
> widespread. I suppose a cadre of "freeware" chip designers could break the
> logjam here, but they'd have to be able to show they designed their chips from
> scratch and didn't incorporate IP owned by the Big Boys. But the PRICE of
> desktop-manufactured computers would definitely drop drastically, with many
> concomitant effects.
>
I begin to see where we differ. Nanotech may be usable in any of several ways
to build computers. I was thinking in terms of rod logic, because IMO there are
no "hard" problems to solve to get to rod logic, other than getting to nanotech
itself. In addition, any competent engineer should be able to generate a set of
basic building blocks: gates, flip-flops, etc., that can form the basis of a
computer design. Now a slight diversion: microprocessor design involves a
hierarchy of design levels. The most abstract level is the "logical
architecture", or "programmer's model", of the system. Below this are levels
like RTL and functional descriptions, which can be expressed in languages like
VHDL. Below this, you get to levels that become increasingly sensitive to the
actual physical and electronic properties of the transistors, other components,
and interconnections on the chip itself. The vast majority of Intel's IP
involves these lower layers. Indeed, there are computer architectures (such as
MIPS) whose upper layers are in the public domain. Therefore, I feel that the
playing field may be very level here, and Intel's only remaining advantage is
a collection of employees who can rapidly retrain from a microlithographic
orientation to a nanotech orientation. Most of their IP will be worthless.
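
To make the "basic building blocks" point concrete, here is a toy sketch in
Python (purely illustrative; all the names are mine and nothing here is a real
design tool): a tiny cell library of gates and flip-flops, composed into a
small circuit, with nothing in it that cares whether the substrate is CMOS
transistors or diamondoid rods.

  # Toy "cell library" sketch (hypothetical, for illustration only).
  def nand(a, b):
      """Universal primitive; every other block below is composed from it."""
      return not (a and b)

  def not_(a):    return nand(a, a)
  def and_(a, b): return not_(nand(a, b))
  def or_(a, b):  return nand(not_(a), not_(b))

  def xor_(a, b):
      t = nand(a, b)
      return nand(nand(a, t), nand(b, t))

  def half_adder(a, b):
      """Sum and carry bits, built only from the gates above."""
      return xor_(a, b), and_(a, b)

  class DFlipFlop:
      """One bit of clocked state: the other basic building block."""
      def __init__(self):
          self.q = False
      def clock(self, d):
          self.q = bool(d)
          return self.q

  class Counter2:
      """A 2-bit ripple counter: flip-flops plus gates, no routing cleverness."""
      def __init__(self):
          self.bit0, self.bit1 = DFlipFlop(), DFlipFlop()
      def tick(self):
          carry = self.bit0.q
          self.bit0.clock(not_(self.bit0.q))
          self.bit1.clock(xor_(self.bit1.q, carry))
          return (self.bit1.q, self.bit0.q)

  if __name__ == "__main__":
      c = Counter2()
      print([c.tick() for _ in range(5)])  # counts 01, 10, 11, 00, 01

The point of the sketch is only that the upper design levels are generic:
anything from a half adder up to a full CPU can be expressed in terms of a
handful of substrate-independent primitives like these.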
Basically, the first generation of nanotech supercomputers will not need
the nanotech equivalent of sophisticated routing tools. We can generate logic
blocks that are easy to design and easy to interconnect, without worrying about
efficient packing or band gap or even heat and power. The first generation can
accept huge inefficiencies in power and volume and still be thousands to tens
of thousands of times smaller than silicon-based computers.
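
For a rough sense of the arithmetic behind that claim, here is a back-of-the-
envelope sketch. The volumes are placeholder assumptions of mine, not measured
or published figures; the only point is that even a 10x-wasteful first
generation leaves orders of magnitude of margin.

  # Back-of-envelope only: all numbers below are placeholder assumptions
  # for illustration, not measurements or published figures.
  cmos_logic_element_um3 = 1.0     # assumed volume per logic element, 1998-era CMOS (um^3)
  rod_logic_element_um3  = 1.0e-4  # assumed volume per rod-logic element (um^3)
  overhead_factor        = 10.0    # allow the first generation to waste 10x on sloppy packing

  ratio = cmos_logic_element_um3 / (rod_logic_element_um3 * overhead_factor)
  print(f"even with {overhead_factor:.0f}x waste, roughly {ratio:,.0f}x smaller")
  # -> roughly 1,000x smaller under these assumed numbers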
>
> > I also believe that there is at least a chance that the availability of
> > nanotech-built massively parallel supercomputers will enable a path to
> > AI, which in turn could permit a nanotech control system that can enable
> > us to build complicated stuff. You appear to think this unlikely?
>
> Not necessarily unlikely, but definitely not certain. I agree that it will
> make HARDWARE cheap and progress in hardware very fast. However, I don't see
> the necessary link to the development of SOFTWARE. That's where the
> disagreement is, I think.
This is beginning to look like an agreement to me. Neither of us rejects the
idea that nano-based supercomputing may lead to AI, and neither of us claims
that it must. On this I defer to Hans Moravec, who more or less believes that
progress in AI correlates directly with the amount of computing power available
to the average AI researcher. My point was that hardware does not by itself
create AI software, but some brute-force approaches may become feasible with
truly massive supercomputing. Of course, AI (or my favorite, the collaborative
SI) is a grand general solution to the software problem: just create a
superprogrammer. But that is not the only way to use truly massive computing
to solve programming problems.
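
By "brute force" I mean things in the spirit of this toy Python sketch (mine,
and absurdly simplified): enumerate candidate programs and keep the ones that
satisfy a specification. Useless at today's scale, much less obviously useless
with many orders of magnitude more compute to throw at it.

  # Toy brute-force program search (a caricature for the sake of argument).
  from itertools import product

  # A tiny "instruction set": unary functions on integers, composed in sequence.
  PRIMITIVES = {
      "inc": lambda x: x + 1,
      "dbl": lambda x: x * 2,
      "sqr": lambda x: x * x,
      "neg": lambda x: -x,
  }

  def run(program, x):
      for name in program:
          x = PRIMITIVES[name](x)
      return x

  def search(spec, max_len=4):
      """Return the first program (tuple of primitive names) matching
      every (input, output) pair in `spec`, shortest programs first."""
      for length in range(1, max_len + 1):
          for program in product(PRIMITIVES, repeat=length):
              if all(run(program, x) == y for x, y in spec):
                  return program
      return None

  # Specification by examples: f(x) = (2x)^2
  spec = [(1, 4), (2, 16), (3, 36)]
  print(search(spec))  # finds ('dbl', 'sqr')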
>
> > We also disagree on the rate of adoption of even the "simple stuff"
> > technology. I feel that the rate will be very rapid, while you think
> > it will take a decade. In essence, I feel that after the time that the
> > first macro-scale part (brick, tennis racket handle, whatever) there will
> > be no new orders for new non-nanotech capital equipment, and within a year
> > most simple hard-good parts will be diamondoid.
>
> I strongly disagree. Existing capital equipment won't disappear. To a great
> extent, the existing physical plant at the time of "simple stuff nanotech"
> represents sunk costs. Owners of those goods will continue to use them, but
> at a developing competitive disadvantage. It won't make sense to replace the
> entire manufacturing plant of the earth in a year. Capital equipment life
> cycles WILL shorten and the old tools will be replaced with much more
> efficient ones more quickly than they otherwise would have, due to the
> financial incentive to "get with the program", but the old technology will
> continue to be a drag for ten years, although less so toward the end,
> obviously.
>
Again, we are pretty close. Note that I said no new orders for non-nanotech
capital equipment. I understand that existing capital equipment will continue
to operate until replaced. Essentially, the only constraint on this process
will be the design time for the replacement equipment.