Re: design complexity of assemblers - full email

From: Avatar Polymorph (avatarpolymorph@hotmail.com)
Date: Mon Nov 25 2002 - 18:17:23 MST


Apologies for partial email, full email follows:

Ramez notes:

Making a car through nanotech could be like pruning a tree to control its
shape. I merely note that we breed trees and now we genetically restructure
them. When I was a child we had just discovered DNA. When my mother was a
child antibiotics weren't around. When her mother was a child the internal
combustion engine had just been invented. Yet persons essentially like
modern humans have been around for at least 5,000 generations, arguably
75,000 generations (if you count fire, hut-building, tools and talking).

You cannot predict future change in the fashion you are doing. Close
predictive mechanisms such as you describe only arise immediately before the
event. It's like trying to say "we can't predict battleship design in 20
years until we have a blueprint and a working model". Generic prediction is
based on rough models over millennia and on measuring wide patterns. The
limitations you have been talking about ARE real, to be sure, and cause
problems NOW, but you are not talking about chemical impossibilities, just
difficulties... exactly what Drexler admitted. Indeed, this is the standard
objection. We heard similar arguments when mapping the human genome. Those
arguments were overcome by increasing computing power and automation.
Similar events are now taking place with proteomics and quantum modelling.

Ramez: "At the current rate of computing power increase, by 2050 we'll
have roughly 10 more orders of magnitude of computing power."

For example, here is the conservative view of Intel's chairman emeritus in
1997:

Intel chairman emeritus Gordon Moore told an audience at the Intel Developer
Forum today that the industry's ability to shrink a microprocessor through
improved manufacturing processes is going to start butting up against the
finite size of atomic particles. Barring a radical shift in microprocessor
science, this means that the industry's ability to double the computing
power of a chip every 18 months (known as Moore's Law) may slow.

Moore in fact showed an electron-microscope image of a microprocessor made
under Intel's currently cutting-edge ".25 micron" chip production
technology, in which the individual atomic layers could be counted and
identified.

"Some time in the next several years we get to some finite limits, but not
before we get through five generations," Moore said. According to one study,
the physical limitations could be reached by 2017.

"That's well beyond my shift," he quipped. "So someone else can do it."

Other views can be found at http://www.qubyte.com/ and at EE Times:

"2002 SCOTTSDALE, Ariz.?It'll be at least a human generation before Moore's
Law begins to run out of gas at around the 9nm and even then it may thrive,
TSMC's chief technologist said Tuesday (March 12).

Chenming (Calvin) Hu told an audience at the annual Semico Summit conference
here that the 9nm node "can be ready more or less on time, in 2028 according
to long-term forecasts or 2024 according to the 2002 (industry roadmap)."
Transistor and reliability physics allow for 9nm devices, although circuit
and architectural innovations surely will be needed to handle an anticipated
voltage of roughly 0.5V.

"Fundamental limits are still a ways off," he said."

So you're right about the ten orders of magnitude by 2050, which with
roughly seven further doublings leads to something like 1.28 x 10^12 times
by about 2060, and so on. If Moore's Law continues it leads to the old
grain-on-a-chessboard doubling situation. I guess if that happened ad
infinitum we could map the entire universe on a computer at some point -
when? Tipler did some figures on this. However, this scenario is not (I
think) tens of thousands of years away...
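
As a rough sanity check on that arithmetic, here is a minimal sketch; the
1x baseline in 2002 and the constant 18-month doubling period are my own
illustrative assumptions, not figures from Ramez or Intel:

import math

# Moore's Law doubling arithmetic, as sketched above.
# Assumptions (mine, for illustration): a baseline of 1x in 2002 and a
# constant doubling of computing power every 18 months (1.5 years).

def moore_factor(start_year, end_year, doubling_period_years=1.5):
    """Computing-power multiplier between two years under constant doubling."""
    doublings = (end_year - start_year) / doubling_period_years
    return 2.0 ** doublings

for year in (2050, 2060):
    factor = moore_factor(2002, year)
    print(f"2002 -> {year}: x{factor:.3g} (about 10^{math.log10(factor):.1f})")

# 2002 -> 2050: 48 years / 1.5 = 32 doublings, 2^32 ~ 4.3e9, i.e. roughly
# ten orders of magnitude - consistent with Ramez's figure.
# 2050 -> 2060: about 6.7 further doublings, roughly another factor of 100
# (rounding to seven doublings gives the 128x used above). Square-by-square
# doubling is exactly the grain-on-a-chessboard pattern: 2^63 grains on the
# last of 64 squares.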

I suspect, though, from larger measurement patterns, that Moore's Law is
only part of the Singularity, and that the pace actually speeds up further
starting more obviously around 2017 and slows down progressively after 2025
(roughly).

I don't have a sound scientific background and I'm not a nanotechnologist,
so I can't argue the specifics. No one ever said nanotech was easy. But when
a generic argument is put forward - e.g. that we can duplicate and better
what nature does with its nanotech - I think it's a strong argument.
I do have a sneaking suspicion that the real barriers, on many measures, are
not mechanical or scientific (words which I think will lose their present
meaning) but ethical.

Anyhow, we should know more by 2010. At that stage we will see whether
average lifespans have already reached over a century, albeit perhaps
without full de-wrinkling (this, however, is not long to follow, even if
delayed a year or two beyond that). Then the 2050 or 2025 or 2100 argument
assumes less significance. By 2015 we may well be considering lifespan
extensions to 2 or 3 centuries, and then we will have a clearer idea of what
is DEFINITELY achievable WITHIN our lifespans (or not)! That's something I
couldn't say in 1985 about 2000!

Towards Ascension
Avatar Polymorph



