Re: Nanotech control systems (was Re: Transhuman Beach Party)

From: Robert J. Bradbury (bradbury@www.aeiveos.com)
Date: Thu Sep 09 1999 - 17:29:44 MDT


On Thu, 9 Sep 1999, Matt Gingell wrote:

>
> Do you think we're likely to understand the functioning of cells well
> enough that we could program them to build complex structures, and
> create specialized, human-designed offspring in the foreseeable
> future?

If by "complex structures", you mean organs, by 2010, I would say
yes, definitely for simple organs such as skin. More complex organs
such as the kidney or the heart, I'd say "maybe". A *key* point to
remember is that you don't have to "understand" it to make it build
an organ. Cells already have working programs to build organs --
what you want to do is select the minimal subset (so you can produce
it cheaply) that *still* works. It's like, if I give you the Linux
source code, can you select from it a set of subroutines that will
read and write to the disk? Yes, easily. What's more, you could
write a program to *randomly* select subroutines from the source
code and sooner or later you would end up with the set that
reads & writes to the disk. All that is necessary is a "test"
that can verify that the reads/writes occur (or an organ is
produced).
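
A minimal sketch of that random search, in Python (the module pool,
subset size, and the passes_test oracle are hypothetical stand-ins
for the real assay):

    import random

    def find_working_subset(modules, passes_test, subset_size,
                            max_tries=1000000):
        """Randomly sample subsets of the full 'program' until one
        still passes the test (reads/writes occur, or an organ is
        produced)."""
        for _ in range(max_tries):
            candidate = random.sample(modules, subset_size)
            if passes_test(candidate):
                return candidate  # a working, cheaper-to-make subset
        return None               # search budget exhausted

All of the intelligence lives in the test, not in the search itself.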

> How well is the process of human development from a single
> cell understood?

It depends on the tissue. For blood cells I would say it is very well
understood and most of the transcription factors are known. For other
tissues the picture is much less clear.

> Do we understand how a cell knows it's in the right
> place to become a liver cell, etc.

In some tissues, such as the brain, parts of the spine, etc., yes.
More importantly, we *know* what these factors look like at a DNA
sequence level and they are almost all "similar" (nature works by
cut, paste and edit). So once we have the genome sequence, pulling
them all out is a week's exercise for a computer; then we will have
a fair amount of work to test them and see what they really do.
But the final result will be that we will have all the factors
that flip the switches.
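
To illustrate why that part is a computer exercise, here is a toy
scan in Python. The "conserved motif" regex is purely hypothetical;
a real search would use weight matrices or alignment tools:

    import re

    # Hypothetical conserved motif shared by a family of factors.
    MOTIF = re.compile(r"TATA[AT]A[AT]")

    def find_candidates(genome):
        """Return the position of every motif hit in the genome."""
        return [m.start() for m in MOTIF.finditer(genome)]

    print(find_candidates("CCTATAAATAGGCTATATATCC"))  # [2, 13]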

> Do we understand how we go from strings of amino acid specifiers
> in DNA to more complex cellular structures?

Harder question. We understand many of the higher level components
that make up cellular structures but are still decoding the subcomponents
that operate at a very fine level. After all, we only have the a.a.
sequence of ~20% of the human genome today, so we can't know it all.

> When I think about the nanotech software question, I imagine an
> automaton with a general-purpose computer and mechanisms to
> reproduce and drop/detect some alphabet of chemical messages.
> Given this idealized scenario, I try to imagine how to go about
> designing a program for automaton 0 that will eventually lead to
> some interesting configuration of N automata. (shapes, patterns,
> images, etc.)

I think this may be an excessively complex way to think about the
problem. A nanoassembler executes a very small set of motion
operations with a small set of feedstock materials (literally
pick up X-feedstock to perform a reaction that puts X in the
specified position in the atomic matrix). [This is what your body
does, only the "pickup" step is generally done by diffusion.]

Thinking about this in terms of automata may be confusing the
process. Think about it in terms of optimization (within reason)
of the assembly process --
   Select AssemblerN from all-assemblers where
      AssemblerN-Available-Feedstock = X and
      Minimum-Of(AssemblerN-Current-Position - Desired-X-position)

If you simply repeat this for each atom, you will not get the
minimum assembly time, but you will get a "good" assembly time.
If it turns out to be highly suboptimal, then you need an algorithm
that attempts to minimize the overall assembler arm movement or
optimizes reaction time overlaps. This is virtually identical
to the instruction & execution unit scheduling done by optimizing
compilers for computers today. It may never be absolutely
optimal, but it will come pretty close.
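
A sketch of that selection rule in Python (the data layout here is
hypothetical; a real controller would track much more state):

    def distance(p, q):
        """Euclidean distance between two 3-D positions."""
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

    def pick_assembler(assemblers, feedstock, target):
        """Greedy scheduling: among assemblers carrying the right
        feedstock, pick the one closest to the desired position.
        Assumes at least one assembler has the feedstock."""
        candidates = [a for a in assemblers
                      if a["feedstock"] == feedstock]
        return min(candidates,
                   key=lambda a: distance(a["position"], target))

Repeating this per atom gives the "good" schedule; squeezing out the
rest means overlapping arm motion with reaction time, just as a
compiler overlaps instruction latencies.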

>
> These are very hard problems to think about - the flow of the system
> is extremely complex and dynamic. To predict the behavior of the
> system at some point in time, you need to know the complete state
> of the system, which requires knowing how the system behaved in
> the previous time-step, etc. The feedback process leads to very
> simple programs with extremely complicated behavior.

I don't think so. The assembly of a specified set of a billion
atoms seems not too much different from executing a billion instructions
on a computer. You could do them one-by-one with no problems.
You could do them in highly parallel fashion if no conflicts
are present. For nanoassembly you want to make sure that two
nanoassemblers don't attempt to occupy the same space at the same
time. That is no different from making sure that two processors
don't attempt to modify the same location in memory on a symmetric
multiprocessor (SMP) today.

These problems are well understood in computer science at this point.
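
A sketch of the SMP analogy in Python, assuming space is carved into
coarse voxels (the grid and the API are hypothetical):

    import threading
    from collections import defaultdict

    # One lock per spatial voxel: an assembler must own the voxel
    # before moving its arm in, just as an SMP processor must own
    # a memory location before modifying it.
    voxel_locks = defaultdict(threading.Lock)

    def place_atom(voxel, reaction):
        with voxel_locks[voxel]:  # excludes all other assemblers
            reaction(voxel)       # no one else occupies this space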

> I really meant random bit-flips in the machine's local/working
> memory.

This should never happen. Just as you have ECC in computers, you
*have* to have the equivalent in nanotech. Eric devotes some attention
to the problem in Nanosystems (discussing radiation), Robert will devote
some more in Nanomedicine. Redundancy and error checking minimize
these problems.
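
The simplest such scheme is triple modular redundancy with majority
voting; a Python sketch (standing in for real ECC such as Hamming
codes):

    def encode(bits):
        """Store three copies of every bit."""
        return [b for b in bits for _ in range(3)]

    def decode(stored):
        """Majority-vote each triple, masking single bit-flips."""
        return [1 if sum(stored[i:i + 3]) >= 2 else 0
                for i in range(0, len(stored), 3)]

    word = [1, 0, 1, 1]
    memory = encode(word)
    memory[4] ^= 1                  # a cosmic-ray bit-flip
    assert decode(memory) == word   # the flip is voted away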

> It seems to be that the smaller you make something, the more
> vulnerable it is likely to be to cosmic rays, etc.
Yep.

> You would certainly place some kind of error correction mechanism
> in that memory, but the sheer number of possible events (number
> of nanites, local memory size, length of time) makes this a rather
> expensive proposition.
Expensive at the nano level, cheap at the macro level.
Redundancy is the key.

> Maybe you need antibody nanites that float around checking the
> program of anyone they bump into and destroying mutants.
I'd consider ECC or the "weighing" of the output machine to be this.

> I think there is something different about software. Its dynamic
> nature makes it much more difficult (if not impossible) to analyze than
> more traditional engineering tasks, and the range of things you can
> do in software is much larger and much less well understood.

Perhaps. It may be that in mechanical engineering you solve the
"accidents" by making the materials thicker, stronger, etc.
Software, on the other hand, suffers from increased complexity
when you try to engineer solutions with "add-ons". Nature
deals with the defective "add-ons" by rapidly eliminating them.

> I said:
>
> > Such a waste, I need to go give them a lecture on the impact of
> > nanotechnology on the development of ET and how evolving
> > nanomachinery would be the coolest application of that unused horsepower.
>
> Well, it's not a waste if we find something...

The key word is *if*. You cannot guarantee you will find aliens
signaling to us. The probability is low. On the other hand
I can (probably) guarantee that if you search the space of
possible nanotech designs, you will find something. Why?
Because we have *no* example of aliens signaling us while
we have millions of "working" nanotech designs (in biology).
Care to make a bet on the relative odds?

Robert


