From: Robert J. Bradbury (bradbury@www.aeiveos.com)
Date: Thu Sep 09 1999 - 02:17:03 MDT
On Wed, 8 Sep 1999, Matt Gingell wrote:
>
> I think this is a fascinating question. I don't have the chemistry
> background to evaluate claims about the mechanical possibility of
> nanomachines,
I'd suggest you look in the mirror, for that is what *you* are.
The *only* real question regarding nanomachines is *how* difficult
"hard/dry" (diamondoid/sapphire) nanotech is compared with
"soft/wet" (protein/DNA/organic molecule) nanotech.
> but it seems to me that actually building one may
> well be an easier problem than writing the software.
I disagree, from the perspective that a huge part of nanotech will
be nothing but repetition of the same movements, parts, etc.
There is very little software involved in most applications
of nanotechnology (though there may be a lot of software involved
in creating and verifying the designs).
> How on earth do you coordinate the behavior of trillions of
> automata with micrometer precision in a noisy, constantly
> changing environment, without any finely-grained way of determining
> position?
The problem of designing a complex nanomachine with a trillion
atoms and verifying it is harder than assembling it. That is
because the verification will almost certainly be a "proof" that
it will work. The assembly methods could be "directed", as
semiconductor lithography or MEMS are now, or as "simple" as having
the device "self-assemble" from small sub-components (as things do
in biology).
Now you certainly could design a device that would be difficult to
assemble, in that you could not reach the place where you needed
to put the last atom, but that would be a poor design. The design has
to incorporate the assembly technology just as all machines
do today.
As far as communication in "noisy" environments goes, that is
very solvable. Your assembly instructions can be "tapped" in
on an isolated set of diamond/buckytube/etc "communication rods".
So long as the signal on the line, which can be very high, exceeds
vibration from surrounding activities, you will always get reliable
information.
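To put a rough number on that claim, here is a minimal sketch (my
own illustration, not anything from Nanosystems), assuming the
surrounding vibration looks like additive Gaussian noise and the
rods carry simple two-level (antipodal) signaling; the bit error
rate collapses once the signal comfortably exceeds the noise:

# Rough bit-error-rate estimate for two-level (antipodal) signaling
# in additive Gaussian noise: BER = 0.5 * erfc(sqrt(Eb/N0)).
# The SNR values below are assumed, not properties of any real rod.
import math

def bit_error_rate(snr_db):
    snr = 10.0 ** (snr_db / 10.0)        # dB -> linear Eb/N0 ratio
    return 0.5 * math.erfc(math.sqrt(snr))

for snr_db in (0, 6, 12, 20):
    print(f"{snr_db:2d} dB -> BER ~ {bit_error_rate(snr_db):.2e}")

Even a modest margin between signal and vibration drives the raw
error rate far below what the chemistry itself will manage.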
> How do you deal with the problem of mutation?
Mutation of what? Your assembly instructions? These have ECC
(error-correcting codes) in the host computer and potentially
in the transmission. You simply increase the redundancy of the
code until the residual error rate is low enough that the assembly
is reliable.
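As a back-of-the-envelope illustration of what increasing the ECC
buys you (the block size, raw bit error rate, and correction
capacities below are assumed numbers, not any particular code), the
chance of an uncorrectable block is just a binomial tail, and it
drops steeply as the correction capacity grows:

# Residual error rate for an n-bit block protected by a code that
# corrects up to t bit errors, with independent per-bit flip
# probability p.  n, p, and t are assumptions for illustration only.
from math import comb

def uncorrectable_prob(n, p, t):
    # P(more than t of n bits flip) -- the blocks the ECC cannot fix
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(t + 1, n + 1))

n, p = 1024, 1e-4            # assumed block size and raw bit error rate
for t in (0, 1, 2, 4, 8):
    print(f"correct up to {t} errors -> {uncorrectable_prob(n, p, t):.2e}")

Each extra bit of correction capacity buys orders of magnitude,
which is why "make the ECC bigger until it is reliable" works.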
If you are dealing with the problem of an atomic reaction that
didn't work, you can have "detection & correction" machines
(just as DNA polymerase does in cells), or, as I think Eric
points out in Nanosystems, you have "stop-on-error" built into the
nanoassembly process. You should be able to detect whether the
reactions occurred properly based on the mass of the thing left
on your assembler tip, the heat produced by the reaction, etc.
You also can go back and scan the assembled device with either
an AFM or an electron beam.
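For what "stop-on-error" might look like at the control-software
level, here is a hedged sketch; the mass-measurement function, the
step objects, and the tolerance are placeholders I am inventing for
illustration, not anything specified in Nanosystems:

# Sketch of a stop-on-error assembly loop: after every placement
# step, compare the measured tip mass against the running expected
# mass and halt the moment they disagree by more than a tolerance.
# `steps` and `measure_tip_mass` are hypothetical stand-ins.

TOLERANCE_DALTONS = 0.5      # assumed: finer than one hydrogen (~1 Da)

def assemble(steps, measure_tip_mass):
    expected_mass = 0.0
    for i, step in enumerate(steps):
        step.execute()                    # perform the placement reaction
        expected_mass += step.added_mass  # mass this step should have added
        measured = measure_tip_mass()
        if abs(measured - expected_mass) > TOLERANCE_DALTONS:
            raise RuntimeError(
                f"step {i}: mass off by {measured - expected_mass:+.2f} Da"
                ", stopping for detection & correction")
    return expected_mass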
With most small parts you can verify they went together properly
simply by weighing them. Since we have atomic mass resolution
for weights, even a single atom missing shows up as a big error.
For example, most biotech labs today routinely do separations
of small DNA fragments (~20 bases) that are missing a base
using a technique called chromatography. The entire art of
DNA sequencing is based on the separation of DNA fragments
that differ in size by a single base. If the small subassemblies
are "perfect", and only "perfect" assembly of larger structures
is possible (i.e. jig-saw puzzle assemblies), then that pretty
much implies that you get perfect larger assemblies as well.
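A quick back-of-the-envelope calculation shows why a single missing
base (never mind a single missing atom) is so easy to see; the 330 Da
figure is a rounded average per-base residue mass, used purely for
illustration:

# Relative mass change from one missing base in a 20-base DNA
# fragment, versus one missing carbon atom.
AVG_BASE_MASS = 330.0                   # daltons, approximate
full_20mer  = 20 * AVG_BASE_MASS        # ~6600 Da
short_19mer = 19 * AVG_BASE_MASS        # ~6270 Da

print(f"missing one base  : {(full_20mer - short_19mer) / full_20mer:.1%}")
print(f"missing one C atom: {12.0 / full_20mer:.2%}")

A ~5% mass difference is trivial for a separation to resolve; the
single-atom case (~0.2%) is what calls for true atomic-mass
resolution.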
> It seems to me that the sheer number of nanites, moles of them
> on a large project, makes undetected corruptions inevitable.
For most large structures, e.g. "nanolegos" for building
macro-scale "active" houses, the structure is going to be highly
regular and so there will be a tolerable defect level.
Defect-containing materials may not be as strong or as fast or as
pretty, but they will still significantly outperform current
materials.
If Hal wants the "Crystal" champagne style mansion (i.e. perfect),
it takes longer to build than if he is willing to settle for the
"Budweiser" beer style mansion. But you and I, looking at it,
will probably be unable to notice the difference. Humans
probably contain a *huge* number of assembly errors (a mole,
birthmark, etc. would be examples of those we can easily observe),
but most of them are hardly noticeable and have a very low impact
on the functioning of the machine.
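To put some assumed numbers behind the defect-tolerance point (the
per-part defect rate and the part counts below are chosen only to
show the scaling): at anything approaching a mole of parts a
literally perfect structure is statistically out of reach, but the
fraction of bad parts stays tiny, which is exactly why a regular,
defect-tolerant design wins:

# Expected defects vs. probability of a perfectly defect-free
# structure, for N identical parts each failing independently with
# probability p.  Both numbers are assumed for illustration.
import math

p = 1e-12                        # assumed per-part defect probability
for N in (1e9, 1e15, 6.022e23):  # a billion parts ... up to a mole
    expected_defects = N * p
    p_perfect = math.exp(-expected_defects)   # Poisson approximation
    print(f"N={N:.3g}: expect {expected_defects:.3g} defects, "
          f"P(perfect) ~ {p_perfect:.3g}")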
>
> We can't get the new air traffic control system right, or even
> manage to get baggage delivered correctly at Denver's new airport,
> and these are 'Hello World' compared to what nanotech would
> require. And calling this stuff 'safety critical' doesn't really convey
> the magnitude of the worst case scenario.
True, but we can for the most part get airplanes "right". It would
appear that the problem may be dealing with large complex special
cases. If we are going to build a lot of them or are willing to
throw away a lot of nonfunctional parts (as nature does), then I
think we can get it right. There is probably a good argument for
building your nano-house before your nano-aircar because the
failures probably pose less risk. Unfortunately the nano-aircar
does more for you so it will probably come first. Fortunately
they will be able to do a lot of self-test-flying and they have
good failure modes (multiple parachutes), so the risk is probably
fairly low.
Once we have designs that can be built and are reliable we will
continue to improve on them. But unless the simulations are really
perfect, there will be things that don't work. The first diesel engine
blew up, but it worked well enough to convince people that it was
worth funding the development of a better model.
>
> I think it's fair to say that the development of software engineering
> processes has lagged eons behind machine advances and, more
> importantly, the growth in complexity of the systems we want to
> build.
Yes. It is interesting that we can build very complex machines that
function quite well but cannot build complex sequences of instructions
that function equivalently well. Is this simply a problem with
testing and software failure modes, or is it something else?
>
> I recently got to visit Cray Research in Eagen, Minnesota, and was taken
> on a tour of the machine room.
Cool! I'm jealous.
> They run SETI at home at a low priority, so when sales isn't trying to
> optimize a potential customer's Fortran they're all searching for signal.
Such a waste; I need to go give them a lecture on the impact of
nanotechnology on the development of ET and how evolving nanomachinery
would be the coolest application of that unused horsepower.
Robert