Re: design complexity of assemblers (was: Ramez Naam: redesigning children)

From: Anders Sandberg (asa@nada.kth.se)
Date: Tue Nov 26 2002 - 17:29:15 MST


On Tue, Nov 26, 2002 at 09:11:56AM -0800, Ramez Naam wrote:
> From: Anders Sandberg [mailto:asa@nada.kth.se]
> > The issue here seems to be levels of modelling. In computational
> > neuroscience we run into this all the time. We have good compartment
> > models that mimic real neurons pretty thoroughly, but take a
> > long time to run. So for large networks we instead use simplified
> > neurons, where the simplifications become more and more radical
> > as we scale things up. The important art here is to select the
> > right level for the job, and the right amount of simplification
> > so that you can trust the result.
>
> I can buy that. But think about the nature of the results you get
> from these simplified models. In other fields (and I'm guessing this
> is true in comp. neuroscience as well) simplified models end up giving
> sort of qualitative results that give you an idea of how the system
> generally behaves, but not exact results that tell you precisely
> what output you'll get for a given input.

Yes, this is true.
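
Just to make "simplified neurons" concrete for anyone outside the
field, here is a toy leaky integrate-and-fire unit - the kind of
drastic reduction of a compartment model one uses for large networks.
The parameter values are arbitrary illustrations, not taken from any
real model:

# Minimal leaky integrate-and-fire neuron: a drastic simplification of a
# compartmental model, trading biophysical detail for speed. Parameter
# values below are arbitrary illustrative choices.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0):
    """Integrate dV/dt = (v_rest - V + I) / tau, spiking at v_thresh."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        v += dt * ((v_rest - v + i_in) / tau)   # leaky integration
        if v >= v_thresh:                       # threshold crossing = spike
            spikes.append(step * dt)
            v = v_reset                         # reset after the spike
    return spikes

# Constant drive strong enough to make the cell fire repeatedly.
print(simulate_lif([20.0] * 1000))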
 
> Now, combine this with the desire to use a swarm of assemblers to
> build a car. We can only model a swarm of assemblers if we use many
> many simplifying assumptions that strip out orders of magnitude of
> complexity. As a result, all our simplified simulation can tell us is
> that the swarm will probably build something car-like.
>
> Or at least, this is the problem I perceive.

I think there is a difference. To make a car-making swarm you don't try
to design 10^20 assemblers of a few billion atoms each and then simulate
the whole system at atomic precision. That would be as ridiculous as
putting the car together atom by atom with an STM. Instead you design
one assembler "by hand": first you build the important core functions
and simulate them carefully, then you add the extras like coatings,
running the entire assembler first on simplified code and then on ever
more exact code and in trickier environments. If at any point you see a
problem, you backtrack and redesign. In the end you have an assembler
that you are not just highly certain will work; you can also abstract it
into a black box with a well-defined interface to the surrounding world,
valid as long as the environment stays within certain parameters. These
black boxes can then be simulated with little concern for their innards,
and you can start looking at local interactions between boxes and the
local workplace, repeating the process at the next level.

I think one can learn much from programming and software design here
(although I have no doubt that there will be plenty of hacks and crufty
code in nanotech too - which is seriously worrying). Creating strict
interfaces and abstraction barriers is a way of managing complexity, be
it code or atoms.
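
To push the software analogy, the black-box idea might look roughly
like this sketch in code (the names, the envelope parameters and the
cycle time are all invented purely for illustration):

# Rough sketch of the "black box with a well-defined interface" idea, in
# software terms. All names and parameters are invented for illustration.

from dataclasses import dataclass

@dataclass
class EnvironmentalEnvelope:
    """Conditions under which the verified assembler model is trusted."""
    min_temp_k: float
    max_temp_k: float
    max_vibration: float

class AssemblerBlackBox:
    """Hides a carefully simulated assembler behind a narrow interface.

    Higher-level simulations call build() and never look at the innards;
    outside the envelope the abstraction simply refuses to answer.
    """
    def __init__(self, envelope: EnvironmentalEnvelope, cycle_time_s: float):
        self.envelope = envelope
        self.cycle_time_s = cycle_time_s

    def build(self, part_spec: str, temp_k: float, vibration: float) -> float:
        if not (self.envelope.min_temp_k <= temp_k <= self.envelope.max_temp_k
                and vibration <= self.envelope.max_vibration):
            raise ValueError("environment outside validated envelope")
        # Innards hidden: just report the time the detailed model predicted.
        return self.cycle_time_s

# A swarm-level simulation sees only the interface, not the atomic detail.
box = AssemblerBlackBox(EnvironmentalEnvelope(280.0, 320.0, 0.01), 5.0)
print(box.build("strut-17", temp_k=300.0, vibration=0.001))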

> One way I do imagine out of this is sort of hybrid fabrication
> techniques, where the overall manufacturing process is more of a
> classical, macro-scale one of putting various subcomponents together
> one by one in an assembly line, and assemblers are used to fabricate
> the specific components or perhaps to actually do the work of
> assembling the components, but only under the direct supervision of a
> very top-down control system.

Yes, this is likely the way to do it. Assemblers are general-purpose
devices, sensitive and "expensive" compared to simple specialized drones
that affix components and factories that move them about. The interfaces
of a drone or factory can be simple, and their behavior can also be made
far more predictable than a general assembler's.
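
A toy sketch of that top-down arrangement, with every name and detail
invented for illustration: the drones expose a deliberately tiny
command set, and all the sequencing lives in the supervisor:

# Toy sketch of the hybrid, top-down scheme: a supervisor walks through a
# fixed assembly plan and hands each step to a specialized drone.
# Everything here is invented illustration.

class AffixDrone:
    """Specialized drone: only knows how to affix one kind of component."""
    def __init__(self, component_type: str):
        self.component_type = component_type

    def affix(self, component: str, slot: str) -> str:
        assert component == self.component_type, "wrong component for drone"
        return f"affixed {component} at {slot}"

class Supervisor:
    """Top-down controller: no autonomy in the drones, all sequencing here."""
    def __init__(self, drones: dict[str, AffixDrone]):
        self.drones = drones

    def run(self, plan: list[tuple[str, str]]) -> list[str]:
        return [self.drones[comp].affix(comp, slot) for comp, slot in plan]

plan = [("wheel", "front-left"), ("wheel", "front-right"), ("door", "left")]
drones = {"wheel": AffixDrone("wheel"), "door": AffixDrone("door")}
print(Supervisor(drones).run(plan))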

> This sort of fabrication process would be effectively a successor to
> current factory-based manufacturing techniques, but it could never be
> used for the kind of bottom-up, plant-a-seed-and-watch-a-car-grow
> nano assembly that some dream about, and that has the greatest
> potential for massive social change.

Convergent assembly ought to be pretty impressive too. Imagine a square
meter of assemblers spinning out parts that are recursively joined
together. I seem to recall that someone calculated a throughput of a
cubic meter of product per 100 seconds (ah, here it is:
http://www.zyvex.com/nanotech/convergent.html). But this system itself
ought to be bootstrapped in the same way. Start with a few seed
assemblers making support devices along which new assemblers are built,
setting up the bottom of the first layer. Then they start to manufacture
drones that take assembled parts and put them into layer two, and then
layers one and two start to build layer three. Each layer ought to take
roughly as long as the original manufacturing - a few minutes or so. So
the car-building system would first bootstrap for a quarter hour or so,
and then quickly make the car. It would probably look very neat.
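
A quick back-of-the-envelope check, taking the 1 m^3 per 100 s figure
from the Zyvex page and guessing the remaining numbers purely for
illustration:

# Back-of-the-envelope numbers for the bootstrap-then-build scenario.
# Layer count, minutes per layer and the car's volume are guesses made
# purely for illustration; the 1 m^3 per 100 s rate is the figure cited
# from http://www.zyvex.com/nanotech/convergent.html.

layers = 3                 # seed layer plus two more, as sketched above
minutes_per_layer = 5.0    # "a few minutes or so" per layer (assumed)
car_volume_m3 = 3.0        # rough volume of a car's solid parts (assumed)
throughput_m3_per_s = 1.0 / 100.0

bootstrap_min = layers * minutes_per_layer
build_min = car_volume_m3 / throughput_m3_per_s / 60.0

print(f"bootstrap: ~{bootstrap_min:.0f} min, car build: ~{build_min:.0f} min")
# Roughly a quarter hour of bootstrapping, then about five minutes of build.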

Social change would happen even with fairly modest nanotech, because it
enables much shorter supply chains. Even if nano turns out to be much
weaker than in the scenarios above, it can still be revolutionary, both
because it can manufacture new kinds of objects and because it makes the
economies of scale very different.

-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y

