From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Mon Nov 25 2002 - 15:47:35 MST
I suppose I have to step in here --
On Mon, 25 Nov 2002, Ramez Naam wrote:
> From: Avatar Polymorph [mailto:avatarpolymorph@hotmail.com]
> > Most arguments about imminent timeframes for Drexlerian
> > self-reproducing nanotech relate to computing power.
Actually I don't think that is accurate. For example,
both NIST and Zyvex have had to deal with the vibrations in
an AFM caused by trucks rolling by on the street outside.
That has little to do with computing power.
> > Noticeably, thinkers such as Drexler, Tipler and Moravec
> > tend to have convergencies in their figures because of
> > this.
Moravec and Kurzweil (yes), Tipler perhaps to a lesser extent.
Drexler doesn't make this leap, however. The only way you can
argue Drexler thinks this is if you believe the computing capacity
enables "real" AI and that real AI then solves the problem of
designing and constructing a "real" assembler.
> Let me speak a bit more clearly. It is not at all clear to me that by
> 2050 we will have sufficient computing power on this planet to have
> designed an assembler that can build a car from the ground up.
Mez may be mixing several complex issues here -- (a) designing an
assembler (~30x100 nm, with a few million atoms); (b) designing a car
with a precise level of atomic detail (which is a *whole* lot of atoms;
see the quick estimate below); (c) the systems problem of coordinating
many (millions to billions) of assemblers to assemble the car.
They are three distinct problems.
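Just to put (b) in scale, here is a quick back-of-the-envelope (the
~1000 kg mass and the ~50 g/mol average atomic mass are round numbers
I'm assuming, not anything from Mez's post):

    # Rough estimate of the number of atoms in a car, to show why (b)
    # involves "a *whole* lot of atoms".  Assumptions: ~1000 kg car,
    # ~50 g/mol average atomic mass (a steel-ish mix) -- round numbers only.
    AVOGADRO = 6.022e23              # atoms per mole
    car_mass_g = 1000 * 1000         # 1000 kg expressed in grams
    avg_atomic_mass_g_per_mol = 50.0

    atoms_in_car = car_mass_g / avg_atomic_mass_g_per_mol * AVOGADRO
    print(f"Atoms in the car: ~{atoms_in_car:.1e}")  # ~1.2e28 atoms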
For example, the Fine Motion Controller is ~2600 atoms, which is just
about an order of magnitude more complex than the most complex chemical
syntheses done to date (e.g., Vitamin B12). I'm reasonably
confident that the synthesis of the subcomponents could be done today
by a reasonably good team of people. Assembling the subcomponents would
be a little bit trickier because we don't have good tools for manipulating
components on the 10-20 nm scale. You can view the FMC as ~1/1000 of the
problem of a real nanoassembler (perhaps more like 1/100 because there is
a lot of redundancy in most nanoassembler designs). So it isn't an
"impossible" problem if enough people were to focus on it.
> Except that this 2700 electron system has only a few hundred atoms in
> it, and we've only really modeled a picosecond of its activity. This
> is a far cry from modeling many minutes of the behavior of a system
> with tens of millions of atoms (as an assembler surely must have).
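To put numbers on how big that gap is (a rough sketch; I'm treating cost
as simply proportional to simulated time and to system size, which if
anything understates the real difficulty):

    # How far "many minutes of tens of millions of atoms" is from
    # "a picosecond of a few hundred atoms".
    simulated_time_now = 1e-12     # seconds (a picosecond)
    simulated_time_wanted = 60.0   # seconds ("many minutes", order of magnitude)
    atoms_now = 300                # "a few hundred atoms"
    atoms_wanted = 3e7             # "tens of millions of atoms"

    time_gap = simulated_time_wanted / simulated_time_now    # ~6e13x
    size_gap = atoms_wanted / atoms_now                      # ~1e5x
    print(f"Time gap: ~{time_gap:.0e}x, size gap: ~{size_gap:.0e}x")
    print(f"Combined (cost linear in both): ~{time_gap * size_gap:.0e}x")  # ~6e18x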
Mez is clearly the expert in this area (molecular modeling), *but*
as I think Hal was trying to point out, you could do science the
old-fashioned way (trial and error). Montemagno's work at Cornell and
now UCLA on molecular motors shows quite clearly that this approach
can work quite well.
> Now, it's true that there are many many faster methods than CCSD.
> Density functional theory in particular is a good combination of speed
> and accuracy, and often scales at N^3, so in 50 years we'll be able to
> model systems that are about 3000x the size of the systems we can
> model with it today.
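The arithmetic behind that 3000x figure does check out if you assume
something like Moore's-law growth in compute (the 18-month doubling time
below is my assumption, not Mez's number):

    # Check the quoted claim: with O(N^3) cost scaling, how much bigger a
    # system can be treated after 50 years of exponential compute growth?
    years = 50
    doubling_time = 1.5                            # years per doubling (assumed, Moore's-law-ish)
    compute_factor = 2 ** (years / doubling_time)  # ~1e10
    size_factor = compute_factor ** (1 / 3)        # invert the N^3 scaling

    print(f"Compute grows by:         ~{compute_factor:.1e}x")
    print(f"DFT system size grows by: ~{size_factor:.0f}x")  # ~2200x, the same ballpark as ~3000x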
The flaw I see in this argument is the assumption that you have to
model the system. That simply isn't the case. (I know Mez may *want*
to model everything, but that doesn't mean it's absolutely necessary.)
There is a huge array of nanotechnological parts (biotechnological,
actually) that have a working track record. Humans can be quite
clever about how they use and adapt these for new functions.
And as Maxygen has shown, one can clearly make use of a random
mutate-and-select strategy (though this should raise some eyebrows
before being used with "real" nanotech).
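For anyone who hasn't seen the strategy in miniature, here is a toy
mutate-and-select loop (purely illustrative: it is not Maxygen's actual
DNA-shuffling protocol, and the "fitness" function is a stand-in for a
real selection or screening assay):

    import random

    # Toy mutate-and-select loop.  "Fitness" here just counts positions that
    # match a target string the algorithm can't see directly; in the lab the
    # equivalent is a selection or screening step.
    TARGET = "GATTACAGATTACA"
    ALPHABET = "ACGT"

    def fitness(candidate):
        return sum(a == b for a, b in zip(candidate, TARGET))

    def mutate(candidate, rate=0.1):
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in candidate)

    population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(50)]
    for generation in range(200):
        population.sort(key=fitness, reverse=True)   # select the best half...
        survivors = population[:25]
        population = survivors + [mutate(random.choice(survivors)) for _ in range(25)]  # ...and mutate
        if fitness(population[0]) == len(TARGET):
            print(f"Matched the target after {generation} generations")
            break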
> So, my conclusion is that we are a long, long way from being able to
> effectively design assemblers before building them.
I'd modify this slightly -- "we may be a long way from being able to
run 'really' realistic assembler simulations".
> The alternative to that design process - trial and error - will undoubtedly
> be effective in producing some powerful technologies, but it will also be
> riskier, and will give us far rougher and more approximate control.
Agreed.
> You may be able to control the ultimate shape and performance of your
> nano-grown car only as well as you can control the shape of a tree by
> pruning and watering it.
Good analogy. Eric does go into some depth in Nanosystems with respect
to the "systems engineering" and "error tolerance" issues. It seems clear
that verification and pruning by the assemblers will be necessary --
but we have good examples, in DNA polymerase and the ribosome, of how
biological assemblers manage to deal with these problems.
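Why proofreading matters is easy to quantify (a sketch with round
numbers; the error rates are the usual textbook figures for DNA
replication, and the ten-million-step assembly is just an arbitrary
example):

    import math

    # Probability of finishing an N-step assembly with zero errors, for a
    # given per-step error rate.
    def p_error_free(per_step_error, steps):
        return math.exp(steps * math.log1p(-per_step_error))

    steps = 1e7                 # e.g. ~10 million placement operations
    no_proofreading = 1e-5      # raw polymerase error rate (~1 in 10^5)
    with_proofreading = 1e-9    # after proofreading + mismatch repair (~1 in 10^9)

    print(f"No proofreading:   P(error-free) ~ {p_error_free(no_proofreading, steps):.1e}")   # effectively zero
    print(f"With proofreading: P(error-free) ~ {p_error_free(with_proofreading, steps):.2f}") # ~0.99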
I'm not trying to assert that these won't be *very* difficult problems to
solve -- but I think the perspective Mez has could be tempered by
considering some other approaches.
Robert