Nanotechnology

From: John K Clark (johnkc@well.com)
Date: Tue Oct 08 1996 - 22:38:52 MDT



First of all I want to say that Damien Broderick's well-written post on Oct 8
was a joy to read and respond to. I don't think Damien's skepticism of
Nanotechnology is justified, but he certainly shows us how to argue
intelligently and forcefully and yet remain civilized. I think we should all
follow his example, including me.
                  

>Damien quoting Drexler: `If a car were assembled from
>normal-sized robots from a thousand pieces, each piece having
>been assembled by smaller robots from a thousand smaller
>pieces [...] then only ten levels of assembly process would
>separate cars from molecules.' (UNBOUNDING THE FUTURE, 1991,
> p. 66)
                  

I re-read that passage in Drexler's popular book. It seems to me that he was
trying to demonstrate to the general reader how small an atom is and how
many it would take to build a car, about 10^30. He was also trying to
demonstrate that even this huge number is not out of reach if you have
command of exponential processes, and with Nanotechnology's self-replication
you would. In his example you can tackle that number in just 10 steps
(1000^10 = 10^30). I don't think Drexler was trying to suggest that the best
way to build a car is to do it in stages, each using 1000 parts, until you
get to atoms at stage 10.
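
To make that arithmetic concrete, here is a short sanity check in Python,
using nothing but the figures above:

    # Count thousand-fold assembly levels from single atoms up to a car.
    atoms_per_car = 10**30
    parts_per_step = 1000

    levels = 0
    size = 1                       # start from a single atom
    while size < atoms_per_car:
        size *= parts_per_step     # each level combines 1000 parts into one
        levels += 1

    print(levels)                  # -> 10
    print(size == atoms_per_car)   # -> True: 1000**10 == 10**30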

On page 60 of "Engines of Creation" he talks about building (growing?) a
rocket engine in one step using general purpose assemblers. You place a
"seed" nanocomputer that has the complete plans for the engine in the center
of a tank, and the tank is filled with a fluid full of assemblers. The seed
nanocomputer has places on its outside for assemblers in the fluid to stick
to. The seed sends instructions to the assemblers connected to it, telling
those machines what their position is in relation to the seed, how many
other assemblers they should snag, what the positions of those assemblers
must be, and what instructions to send them. The chain keeps growing, and
soon an assembler scaffolding for the entire rocket engine is in place, with
each assembler knowing its position relative to the seed.
Now it's time to begin manufacturing. Drain the tank of its fluid full of
excess assemblers and pour in a new fluid that has chemical fuel (food) for
the assemblers and raw materials, mostly carbon, for the construction.
A few hours later your rocket engine is finished.
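
For what it's worth, here is a toy software sketch of how that recruitment
chain might be organized. The breadth-first recruiting and the little
cube-shaped "plan" are my own stand-ins for illustration, nothing from
Drexler's book:

    # A toy sketch (Python) of the seed scheme described above: the seed
    # knows the whole plan, and each assembler that latches on is told its
    # position and which neighboring positions it should recruit for next.
    from collections import deque

    # Stand-in "plan": lattice positions the scaffold must occupy.
    # A real engine plan would be vastly larger, of course.
    PLAN = {(x, y, z) for x in range(3) for y in range(3) for z in range(3)}

    NEIGHBORS = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]

    def grow_scaffold(seed=(0, 0, 0)):
        """Breadth-first recruitment outward from the seed."""
        placed = {seed}             # assemblers that know their position
        frontier = deque([seed])
        while frontier:
            here = frontier.popleft()
            for dx, dy, dz in NEIGHBORS:
                there = (here[0] + dx, here[1] + dy, here[2] + dz)
                # Snag a free-floating assembler for each planned,
                # still-empty neighboring position and tell it where it is.
                if there in PLAN and there not in placed:
                    placed.add(there)
                    frontier.append(there)
        return placed

    scaffold = grow_scaffold()
    print(len(scaffold) == len(PLAN))   # -> True: every position filled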

On the other hand, if you needed a huge number of rocket engines very quickly
you might not want to rely entirely on general purpose assemblers and make
everything in one step. Some parts, like carbon fibers, would be used in
almost all the things you would want to make, so it would be more efficient
to make them with special purpose assemblers rather than with the more
flexible but slower general assemblers. Also, because you'd need to do it so
often, it would be helpful to have a specialized tool to stretch the sleeve
of a bearing so you can fit it over a shaft. Drexler talks about mass
producing modular building blocks that you use frequently, like identical
CPUs and memory arrays. Most of these blocks would be less than 10
nanometers across, but a few could be as large as 1000 nanometers (10^-6
meters), still far too small to see.
                 

>Compare this with living replicators (the single existence
>proof available). Genetic algorithms in planetary numbers
                 

One existence proof is all that is needed, and huge numbers are no problem
for Nanotechnology.

                  
>lurch about for billions of years, replicating and mutating
>and being winnowed via their expressed phenotypic success
                

"Lurch" is a good word for it. The reason it took billions of years to
produce us is that evolution is so dumb. Even the crummiest designer is
brilliant compared to a random cosmic ray hitting a strand of DNA causing a
mutation.

>That [DNA] sketchy information gets unpacked via (1) a rich
>information-dense environment
                    

But that's no different than Nanotechnology, because software is always
useless without a computer to run it on. Digital DNA information needs to be
run on a ribosome to be implemented and produce an output, a protein.
In Nanotechnology digital information also needs to be implemented, in an
assembler in this case, to produce an output, a physical structure of any
type, not just a protein.
                    

>the brain wires itself in such a stochastic fashion,
>*without needing precise wiring diagrams* in the DNA recipe.
>Even so, it takes 20 years to build and program a natural
>human-level intelligence
                    

I don't understand the use of the phrase "even so" in this context. It takes
a long time exactly because you don't have a precise wiring diagram at birth;
it takes 20 years of research and development, which we call education, for
the body to find such a diagram. When a brain crashes catastrophically, and up
to now every single one has, there is no backup and all that R and D is lost.
The next brain must re-invent the wheel. There has got to be a better way.
                    

>Now we are told that contrived nanosystems will bypass all
>this thud-and-blunder darwinian nonsense and cut straight to
>the chase.
                 

Yes.
                     

>we might simply (`simply') scan the object at the atomic
>level, file the 3D co-ordinates of each atom or molecule,
>and then have that instruction set run through a zillion
>teeny nano assemblers
                     

Yes.
                     

>How many atoms was that again? How much memory do you have
>in your hard drive?
                     

In section 12.7 of Nanosystems Drexler shows how to store 10^25 bits per cubic
meter in fast RAM, or 10^28 bits in slower (but still fast by our standards)
"tape" storage. It should also be noted that the arm on a Nanotechnology-based
assembler would be 50 million times shorter than a human arm, which
means it could move back and forth 50 million times faster than a human arm.
Also, just one pound of carbon would contain about a billion billion
nanorobots with such arms.
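
A quick back-of-the-envelope calculation in Python supports those numbers;
the atoms-per-robot figure is my own assumption, put in just for
illustration:

    # Rough check of the claims above (my numbers, not Drexler's tables).
    AVOGADRO = 6.022e23
    grams_per_pound = 453.6
    carbon_molar_mass = 12.0       # grams per mole

    atoms = AVOGADRO * grams_per_pound / carbon_molar_mass
    print(f"{atoms:.1e} carbon atoms per pound")     # ~2.3e25

    # Assume each nanorobot is built from ~2e7 atoms (an assumption).
    atoms_per_robot = 2e7
    print(f"{atoms / atoms_per_robot:.1e} robots")   # ~1.1e18, a billion billion

    # Mechanical frequency scales inversely with size, so an arm
    # 50 million times shorter can cycle 50 million times faster.
    human_arm_strokes_per_sec = 1.0
    print(f"{human_arm_strokes_per_sec * 50e6:.0e} strokes/sec")   # 5e7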
                  

>Jack Cohen and Ian Stewart's treatment of emergent attractors
>in dynamical systems, real-world hierarchies regarded as
>Tangled Strange Loops (borrowing from Douglas Hofstadter)
>[...] True, they are making an epistemological and not an
>engineering point
                  

Hofstadter is one of my heroes, by the way, but as you point out, he was
talking about epistemology, not engineering. You don't need to understand why
something works in order to duplicate it. You can parrot a foreign language
even if you don't know a word of it. A chip designer may have no idea how a
program running on his chip operates. A typist may have no understanding of
what he types; some would say that if nothing else my post proves that point.
                  

>I'm inclined to think we'll get interesting results faster
>through massively parallel darwinian simulations in digital
>configuration space (CAs and GAs), or through superposed
>Deutsch quantum computations, or even via artfully-coded DNA
>computations in a meaty broth (a process that's already been
>used to bust fancy encryption schemes).
                  

That would certainly produce interesting results; whether it would be faster
than other approaches I'm not sure. Quantum Computers may not even be
possible (but probably are, I think) and are certainly much more exotic than
Nanotechnology.
                  

>The link from impossibly complex algorithms generated by
>such means,

If such algorithms exist then they are not impossible.
                  

>nano fabrication, will very quickly escape our understanding
                  

If by "our" you mean 3 pound brains made of meat then I agree, but again, you
don't need to understand why something works to manufacture it. Even now we
can improve things without knowing a lot about it. No one man can have a
deep understanding about exactly why Windows NT does what it does on all
occasions, yet version 4.0 is better than version 3.51. Besides, I don't
think 3 pounds of meat will be the smartest thing around forever.
                 

>if you mean to build a new, improved car from the atom up,
>*you will need the kinds of intelligent, costly programming
>that Lyle insists upon.
                 

Mostly true, but I don't think it would be all that costly. As I said,
although it's not absolutely necessary (witness evolution), it's very helpful
to know what you're doing if you want to improve something. I think I made it
clear in my post that I was just referring to manufacturing.
                 

>By the time we've chunked up the 10 steps,`identical' parts
>might have diverged again
                 

Perhaps, and that's why I don't think we'd ever use so many steps that we
could no longer be confident that the parts were absolutely identical, or
have so many different types of parts that it got confusing. I'm not sure
how many steps that would be, but I have a hunch it would be less than 10.
                  

>especially if they're put together by anything resembling
>chemo-gradient-guided self-assembly.
                  

A chemo-gradient would be a very crude method of informing an assembler of
its position; it may be good enough for life, but I doubt it would be used
much in Nanotechnology, except perhaps in the very earliest examples.
                              

>>Nanotechnology can manipulate matter without ever
>>leaving the digital domain, and I think most of us
>>know the advantage of that.

>Again, if you can specify adequately at the digital level,
>well and good. How many bytes was that, did you say? (But
>this is certainly a point in favor of building from the
>lowest level up, if we can.)
                              

If you made up a list that contained the type and position of every atom in a
car, this list would contain a HUGE amount of redundancy. If you put an iron
atom in a particular spot, then it's almost certain that the atom you will
want to put next to it will also be an iron atom. We can get rid of this
massive redundancy and reduce the size of the list by an astronomical amount
by using the same sort of algorithms we use today for data compression in ZIP
and GIF files.
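
You can watch this happen on a toy scale with Python's zlib module (the same
DEFLATE algorithm ZIP uses); the mostly-iron parts list is my own made-up
example, not a real car file:

    import zlib

    # One line per atom: element symbol plus a lattice position.
    atoms = []
    for i in range(100000):
        element = "Fe" if i % 1000 else "C"   # an occasional carbon impurity
        atoms.append(f"{element} {i % 100} {(i // 100) % 100} {i // 10000}")
    raw = "\n".join(atoms).encode()

    packed = zlib.compress(raw, 9)
    print(len(raw), "->", len(packed),
          f"({len(raw) / len(packed):.0f}x smaller)")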
             

                                             John K Clark johnkc@well.com



