From: Ramez Naam (mez@apexnano.com)
Date: Mon Nov 25 2002 - 11:10:15 MST
From: Avatar Polymorph [mailto:avatarpolymorph@hotmail.com]
> Most arguments about imminent timeframes for Drexlerian
> self-reproducing nanotech relate to computing power.
> Noticeably, thinkers such as Drexler, Tipler and Moravec
> tend to have convergences in their figures because of
> this.
>
> [primer on the singularity snipped]
>
> Or again: out of all possible software design, the design for
> 5 basic cars and trucks can be placed on-line in a few years
> and be available for billenia.
Let me speak a bit more clearly. It is not at all clear to me that by
2050 we will have sufficient computing power on this planet to have
designed an assembler that can build a car from the ground up. I'm
not saying it's impossible, just that there are good reasons to doubt
it, and that no one that I know of has made a good mathematical case
for why it should be possible.
At the current rate of computing power increase, by 2050 we'll have
roughly 10 more orders of magnitude of computing power.
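To make that arithmetic explicit, here's a rough sketch in Python
(assuming a doubling roughly every 18 months; the exact doubling time
shifts the answer a bit, but not the order of magnitude):

    # Back-of-the-envelope extrapolation of computing power, 2002 -> 2050,
    # assuming one doubling every 18 months.
    import math

    years = 2050 - 2002              # 48 years
    doublings = years / 1.5          # ~32 doublings
    growth = 2 ** doublings          # ~4.3e9
    print(math.log10(growth))       # ~9.6 -> roughly 10 orders of magnitude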
On the flipside, highly accurate modeling methods for atomic phenomena
(the ones I would absolutely want to use to test an assembler design
before loosing it on the world) scale at something like N^7, where N
is the number of electrons. I'm referring here to techniques like
CCSD. Today we can use such techniques to model systems of perhaps a
few dozen electrons over a picosecond, given a month of supercomputer
time. For convenience, let's imagine that "a few dozen"
= 100. So, given that N^7 scaling, with 10 additional orders of
magnitude of computing power, a supercomputer using a month of
processing time may be able to model a picosecond of a system with
almost 2700 electrons. Excellent!
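The "almost 2700" figure is just the N^7 scaling turned around: if the
computing budget grows by a factor of 10^10 and the cost grows as N^7,
then the tractable number of electrons grows by (10^10)^(1/7). A quick
sketch, using the convenient 100-electron starting point from above:

    # N^7 scaling: how many electrons does 10^10 times the compute buy?
    n_today = 100                    # electrons per picosecond per supercomputer-month
    compute_gain = 1e10
    n_future = n_today * compute_gain ** (1.0 / 7.0)
    print(round(n_future))           # ~2683 electrons, i.e. "almost 2700"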
Except that this 2700 electron system has only a few hundred atoms in
it, and we've only really modeled a picosecond of its activity. This
is a far cry from modeling many minutes of the behavior of a system
with tens of millions of atoms (as an assembler surely must have).
Now, it's true that there are many, many faster methods than CCSD.
Density functional theory in particular is a good combination of speed
and accuracy, and often scales at N^3, so in 50 years we'll be able to
model systems that are roughly 2,000x the size of the systems we can
model with it today (the cube root of 10^10). Even with such a method,
though, you're talking about getting to the point of modeling systems
with around a hundred thousand atoms for just a picosecond of their
existence. AND with a faster method you have many more potential
inaccuracies, as the solution to the Schrödinger equation that you're
producing is more and more approximate.
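The same sketch for an N^3 method like DFT, where the tractable system
size grows by the cube root of the extra compute:

    # N^3 scaling: tractable system size grows by (1e10)**(1/3).
    compute_gain = 1e10
    size_factor = compute_gain ** (1.0 / 3.0)
    print(round(size_factor))        # ~2154, i.e. roughly 2,000x larger systems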
So, my conclusion is that we are a long, long way from being able to
effectively design assemblers before building them. The alternative
to that design process - trial and error - will undoubtedly be
effective in producing some powerful technologies, but it will also be
riskier, and will give us far rougher and more approximate control.
You may be able to control the ultimate shape and performance of your
nano-grown car only as well as you can control the shape of a tree by
pruning and watering it.
mez