Re: Lyle's Law

From: Damien Broderick (damien@ariel.ucs.unimelb.edu.au)
Date: Sun Oct 06 1996 - 08:06:33 MDT


Hi everyone

>On Sat 05 Oct 1996 I mumbled:
>
> >I have to say I regret the tone of rancor that seems to have
> >crept into these public postings of late.
>
and John K. Clark responded:

>I regret the rancor too. I can strongly disagree with somebody and still be
>quite fond of them, but I'm only human (damn it). When somebody asks me to
>tell them exactly what it is about Nanotechnology manufacturing that makes it
>much easier to do than conventional manufacturing, and I take the time to
>respond with 6 fundamental reasons, and it is dismissed with 4 words, then
>I get mad.

Quite so. I suggest we go back a couple of steps and see if this impasse
can be broken.

On Thu, 26 Sep 1996 Lyle Burkhead <LYBRHED@delphi.com> wrote:

>The "automated" factories that you speak of are only
>automated at one point in the manufacturing process.
John K. Clark replied:

>And Nanotechnology without AI would be automated in the manufacturing
>process too, that is, it could duplicate any existing object. If you have a
>good description of an object, then you could make another one, and if
>you don't, then Nanotechnology can examine the object and get a detailed
>description.

But is this true? If true, is it (comparatively) easy? Good ole Drexler et
al [hi, Al!] make the following interesting and, I think, key point:

`If a car were assembled by normal-sized robots from a thousand pieces,
each piece having been assembled by smaller robots from a thousand smaller
pieces [...] then only ten levels of assembly process would separate cars
from molecules.' (UNBOUNDING THE FUTURE, 1991, p. 66)

In broad terms:

`Molecular assemblers build blocks that go to block assemblers [which] build
computers, which go to system assemblers, which build systems, which-- at
least one path from molecules to large products seems clear enough.' (ibid)

This is the Lego model of nanomanufacture. It implies that sarin gas,
automobiles and raw (or cooked) steaks can be compiled in a ten-deep
hierarchy of gluing one atom (or molecule, or large chunk) onto another by
following a template.
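
The arithmetic is worth making explicit. Here is a back-of-envelope
sketch in Python (my rough figures, not Drexler's: assume a ~1500 kg
car made, for simplicity, entirely of iron). It shows why ten
thousandfold levels are numerically sufficient, while saying nothing
about the difficulty of each gluing step.

# Back-of-envelope check of the ten-level assembly hierarchy.
# All figures are rough assumptions, not measurements.

AVOGADRO = 6.022e23
car_mass_g = 1.5e6      # assume a ~1500 kg car
molar_mass_g = 56.0     # pretend it is all iron, for simplicity

atoms_in_car = car_mass_g / molar_mass_g * AVOGADRO
parts_in_hierarchy = 1000 ** 10   # a thousand pieces at each of ten levels

print(f"atoms in a car: ~{atoms_in_car:.1e}")          # ~1.6e28
print(f"1000^10 parts : ~{parts_in_hierarchy:.1e}")    # ~1.0e30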

Compare this with living replicators (the single existence proof available).
Genetic algorithms in planetary numbers lurch about for billions of years,
replicating and mutating and being winnowed via their expressed phenotypic
success, and at last accumulate/represent a humungous quantity of
compressed, schematic information. They do *not*, one must remark, entirely
*embody* it. That sketchy information gets unpacked via (1) a rich
information-dense environment (so that a string of amino acids folds up
spontaneously into a protein whose effectiveness depends precisely on the new
`emergent' folded surface) and (2) a sequence of `darwinian' selection
processes at higher and higher hierarchical (`holonic') levels.

Remember, a one-cell human embryo uses on the order of a mere 100,000 genes
to kick-start a critter which ends up with trillions of dedicated cells. It
does so by using compression tricks: modularity (those trillions comprise
variants of just 256 distinct types of human cell), segmentation,
chemo-gradients, etc. While it's possible that Kauffman's spontaneous
ordering principles also restrict and potentiate the permissible outcomes,
we don't yet have a clue (as far as I know) what the nano equivalents of
those rules would be.
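
A crude way to see the scale of that compression, in Python, with
order-of-magnitude assumptions only (3 billion base pairs at 2 bits
each; ten trillion cells tagged with one of 256 types, ignoring
position entirely):

# Rough illustration of the compression the genome achieves.
# Every figure below is an order-of-magnitude assumption.

genome_base_pairs = 3e9
genome_bytes = genome_base_pairs * 2 / 8   # 2 bits per base: ~0.75 GB

cells = 1e13              # trillions of dedicated cells
bits_per_cell = 8         # a bare type label (256 types), no position!
naive_bytes = cells * bits_per_cell / 8    # ~10 TB for labels alone

print(f"genome recipe     : ~{genome_bytes / 1e9:.2f} GB")
print(f"naive cell labels : ~{naive_bytes / 1e12:.0f} TB")
print(f"compression ratio : >{naive_bytes / genome_bytes:.0e}")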

J-P Changeux, Gerald Edelman and William Calvin, e.g., provide plausible
accounts of how the brain wires itself in such a stochastic fashion,
*without needing precise wiring diagrams* in the DNA recipe. Even so, it
takes 20 years to build and program a natural human-level intelligence, even
though all the elements are being assembled by nanofabricators as fast as
they can manage it. (And don't forget, all the cultural information that
has modulated our innate grammar templates is stored in extended semiotic
systems of immense complexity far surpassing anything the Human Genome
Project is going to hold in its completed files.)

Now we are told that contrived nanosystems will bypass all this
thud-and-blunder darwinian nonsense and cut straight to the chase. Instead
of mastering the huge number of coding steps used to specify and debug a
complex object, we might simply (`simply') scan the object at the atomic
level, file the 3D co-ordinates of each atom or molecule, and then have that
instruction set run through a zillion teeny nano assemblers, which will
allocate and time-share the job of first making the smallest components,
then joining them into the next-biggest chunks, and so on up the ten steps
to a gleaming, atomically precise Consumer Thing!

How many atoms was that again?

How much memory do you have on your hard drive? (Is that a stupid and
outdated question, given that the process is massively parallel, its data
stored in a zillion independent - but inter-communicating - nano jobbies?
Oh, your scanners are using compression tricks too? See below...)
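
Since I'm asking, here is the naive estimate in Python, reusing the
rough car figure from the sketch above and a deliberately stingy 13
bytes per atom; every number is an assumption:

# How big is `a good description of an object', stored naively?
atoms = 1.6e28              # rough atom count for a car, as above
bytes_per_atom = 3 * 4 + 1  # three 4-byte coordinates plus a species byte
                            # (4 bytes per coordinate over a few metres is
                            # only ~nanometre steps, so this *undercounts*)

scan_bytes = atoms * bytes_per_atom
print(f"naive scan file : ~{scan_bytes:.0e} bytes")
print(f"                = ~{scan_bytes / 1e12:.0e} terabytes")

# A good 1996 hard drive holds ~1e9 bytes:
print(f"1 GB drives req.: ~{scan_bytes / 1e9:.0e}")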

John, I think this little excursion shows how much is begged by positing
bravely: `if you have a good description of an object'.

A more detailed examination of this issue would need to look at, e.g., Jack
Cohen and Ian Stewart's treatment of emergent attractors in dynamical
systems, and of real-world hierarchies regarded as tangled Strange Loops
(borrowing from Douglas Hofstadter) rather than as neatly arranged, simple
step-after-step stairways. They observe: `Reductionism holds that the
high-level structure is a logical consequence of the low-level rules, and we
have no wish to dispute this... [But for *understanding* we] have to be
able to follow the chain of deduction. If that chain becomes too long, our
brains lose track of the explanation, and then it ceases to be one. But
this is how emergent phenomena emerge' (THE COLLAPSE OF CHAOS, Penguin,
1995, p. 438). True, they are making an epistemological and not an
engineering point, but it has practical implications for anyone hoping to
emulate hypercomplex systems from the molecule up, via explicit nano
procedures.

I'm inclined to think we'll get interesting results faster through massively
parallel darwinian simulations in digital configuration space (CAs and GAs),
or through superposed Deutsch quantum computations, or even via
artfully-coded DNA computations in a meaty broth (a process that's already
been used to bust fancy encryption schemes). The link between impossibly
complex algorithms generated by such means and nano fabrication will very
quickly escape our understanding, and as a result we will be obliged to
depend for reliable results (and our safety!) upon iterated trial & error
within impeccable confinement protocols, perhaps overseen by enhanced AI or
AI-augmented human watchdogs. None of this suggests Civilisation Made
Simple By Scan-And-Follow-The-Dots Nano.
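
To make plain what I mean by darwinian search, as opposed to explicit
specification, here is a toy genetic algorithm in Python; the target,
population size and mutation rate are arbitrary illustrative choices,
and this is a sketch of the *style* of search, not a nanodesign tool.

import random

TARGET = [1] * 64                  # the `design' nobody ever writes down
POP, GENS, MUT = 200, 500, 0.01

def fitness(genome):
    # count positions where the genome matches the target
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    # flip each bit independently with probability MUT
    return [1 - g if random.random() < MUT else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
for gen in range(GENS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break                                   # a perfect match emerged
    parents = population[: POP // 2]            # truncation selection
    population = parents + [mutate(random.choice(parents))
                            for _ in range(POP - len(parents))]

best = max(population, key=fitness)
print(f"generation {gen}: best fitness {fitness(best)}/{len(TARGET)}")

The winning genome is found, not specified; nobody ever has to
understand it. At nano scale the analogous winners may be just as
opaque to their discoverers, which is exactly why the confinement
protocols matter.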

John went on:

>Exactly what is it about Nanotechnology that is so different from
>conventional manufacturing...?

[and offered six answers:]
                  
>1) The parts a car factory uses are very expensive, the parts that
> Nanotechnology uses are very cheap.

Assuming you can use them without a sublimely complex scanning & assembly
protocol, or can afford to wait for the atom-by-atom scan and have the
memory to write it to.
     
>2) A car factory uses many thousands or millions of different types of
>parts and you must learn how to operate all of them. At the most,
>Nanotechnology uses 92 different parts (the elements) but in the real
>world almost everything we know is made of less than 20, less than 10
>parts for life.

Sorry, I find this confused. By the time we're at level 3 or 4, we are back
to lots of different specialised sub-units, on Drexler's own account. Am I
missing something here?
                  
>3) All the many different parts the factory uses are fragile [but] There is
>no way you can damage the parts Nanotechnology deals with.

A scanned and reassembled steak is a steak. On the other hand, taking
optimum advantage of nano fabrication requires that we *won't* be using a
scanning protocol, since a faithfully atom-by-atom scanned car will have all
the drawbacks of its original. And if you mean to build a new, improved car
from the atom up, *you will need the kinds of intelligent, costly
programming that Lyle insists upon.* I don't see how this can be avoided,
except by waving the Magic SuperIntelligent Handy Obedient AI Genie wand.
This might come true, as we go into the Singularity, but like some other
(partial) skeptics I wonder if it's not akin to the wistful dream a bunny
rabbit has of the way >Rabbits will provide all their food and comforts for
them. That's true, they do, until the >Rabbits turn up one day with a
cylinder of unfriendly calicivirus.
                  
>4) None of the parts the factory uses are absolutely identical. [...]
> Atoms have no scratches on them to tell them apart.

By the time we've chunked up the 10 steps, `identical' parts might have
diverged again, especially if they're put together by anything resembling
chemo-gradient-guided self-assembly. (This is just an intuition on my part;
I hope I'm wrong.)

>5) Nanotechnology can manipulate matter without ever leaving the digital
> domain, and I think most of us know the advantage of that.

Again, if you can specify adequately at the digital level, well and good.
How many bytes was that, did you say? (But this is certainly a point in
favor of building from the lowest level up, if we can.)

>6) Most of the parts a factory uses are very complex [...] Nanotechnology
>is like building with Lego blocks, you can build structures of arbitrary
>complexity [...] It's easy to develop an algorithm to examine any Lego
>object and then build a duplicate, it's very far from easy to find an
>algorithm that would do the same with a car.

Of course you can copy a molecular *Lego* block, but it's the monster
algorithm for connecting those blocks into a car that is exactly what's at
stake.

I apologise for the vast length of this post, but I think the topic requires
exceptional care (and even this much blather is just blundering on the
outskirts).

Please, folks, tell me why I'm wrong...

Best to all, Damien

-------------------------------------------------------------------------
Dr Damien Broderick / Associate, Dept. English and Cultural Studies
        University of Melbourne, Parkville 3052, AUSTRALIA
                @: damien@ariel.its.unimelb.edu.au
        bio/biblio: http://www.vicnet.net.au/~ozlit/broderic.html
-------------------------------------------------------------------------