FYI: autoreplication (Re: Nano vs Macro Self-Replication (fwd)) (long)

From: Eugene Leitl (Eugene.Leitl@lrz.uni-muenchen.de)
Date: Wed Sep 18 1996 - 07:34:52 MDT


---------- Forwarded message ----------
Date: Tue, 17 Sep 1996 12:42:11 -0400
From: Robert Freitas <rfreitas@calweb.com>
To: nanotech@cs.rutgers.edu
Newsgroups: sci.nanotech
Subject: Re: Nano vs Macro Self-Replication

I. Self-Modification and Self-Replication

> From: Anthony G. Francis on Thu, Sep 5, 1996 5:43 AM
> Personally, I don't think we have a chance of building self-modifying
> and self-replicating nanomachines until we can do it with
> macromachines (or at least macrosystems), since we are far less
> experienced with nanotech.

(A) Mr. Francis is probably correct, regarding "self-
replication."

After all, macroscale (~1-10 cm) self-replicating machines have
already been designed, built, and successfully operated (e.g.
Jacobson, 1958; Morowitz, 1959; Penrose, 1959; references in
NASA CP-2255, 1982). (True, these were all very SIMPLE
replicators. But they DID replicate.) Additionally,
(nanoscale) replicating molecular systems have been designed,
constructed, and successfully operated by chemists. Only the
microscale realm remains "unreplicated" by human technology.
Since self-replication can now be done at the macro scale, we
are conceptually prepared to implement this function at the
micro scale via nanotech, once our nanoscale tools become more
refined. So in that sense, I agree with Mr. Francis.

(B) Regarding "self-modification," Mr. Francis is almost
certainly incorrect. Of course, any machine can be programmed
or designed to randomly modify itself. (e.g., "once every
minute, unscrew a bolt and throw it away.") However, since
self-modifications that degrade performance or shorten life are
useless, I presume Mr. Francis is referring to an evolutionary
process in which a device responds positively to challenges
arising from its operating environment, modifying its hardware
or software to maintain or to enhance its fitness or
survivability in its changing environment.

To the extent such evolution is an undirected or blind process,
many trials involving minor changes are required to find, by
pseudo-random search, a single design change that will prove
helpful. Basic physical scaling laws mandate that, all else
being equal, replication time is a linear function of size (so
replication rate scales inversely with size). Thus the smaller
the replicator, the greater the number of trial offspring it
can sire, and test for fitness, per unit time. (This is because
manipulator velocity is roughly scale invariant, while travel
distance decreases linearly with size, so each assembly motion
takes proportionally less time.)
Macroscale replicators (using blind trials) are far less likely
to be able to generate enough trial offspring to stumble upon
beneficial modifications, in any reasonable amount of time.
Microscale replicators, on the other hand, should be able to
generate offspring a million times faster (or more), thus are
far more likely to randomly turn up productive modifications,
hence to "evolve."

To the extent that such evolution is an intelligently directed
process, microscale replicators still enjoy the same tremendous
scaling advantage in computational speed. Microscale
replicators built using nanotech will use nanocomputers (e.g.
diamondoid rods with nanometer-scale features) to design, build,
and analyze their directed-modification offspring machines. Per
unit mass or per unit volume, these nanocomputers will operate
at least a million times faster than computers built using
macroscale tools (e.g. silicon chips with ~micron-scale
features) that will direct the macroscale replicators.

Of course, macroscale replicators can be evolved slowly using
macroscale computers. (Indeed, this is called "the history of
human technology.") Or, nanocomputers could be used to direct
the evolution of macroscale replicators, which will go a bit
faster. But clearly the theoretically fastest evolutionary
speed will come from nanoscale computers directing nanoscale
replicators.

Since self-modifying replicators should actually be easier to
implement at the nanoscale than at the macroscale, macroscale
experience with self-evolving mechanical systems is probably
unnecessary.

II. Infinite Regress and the Fallacy of the Substrate

> There is an argument that self-replicating machines can _only_ be
> built with nanotechnology, since components have to worry about
> quality control on their components, which have to worry about
> quality control on their subcomponents, and so on, leading to an
> otherwise infinite regress that only comes to an end when you
> get to atoms, which for identical isotopes can be treated as
> perfectly interchangeable.

This argument, as stated, has at least two fundamental flaws.

First, since self-replicating machines have already been built
using macrotechnology (see above references), it is therefore
already a fact that nanotechnology is NOT required to build
replicators. QED. End of discussion.

Second, the author of this argument assumes that "components
have to worry about quality control on their components." He
has fallen prey to what I call the "Fallacy of the Substrate."
I shall explain.

Many commentators, whether implicitly or explicitly, assume that
replication -- in order to qualify as "genuine self-replication"
-- must take place in a sea of highly-disordered (if not
maximally disordered) inputs. This assumption is unwarranted,
theoretically unjustifiable, and incorrect.

The most general theoretical conception of replication treats
it as akin to a manufacturing process. In this
process, a stream of inputs enters the manufacturing device. A
different stream of outputs exits the manufacturing device.
When the stream of outputs is specified to be identical to the
physical structure of the manufacturing device, the
manufacturing device is said to be "self-replicating."

Note that in this definition, there are no restrictions of any
kind placed upon the nature of the inputs. On the one hand,
these inputs could consist of a 7,000 K plasma containing equal
numbers of atoms of all the 92 natural elements -- by some
measures, a "perfectly random" or maximally chaotic input
stream. On the other hand, the input stream could consist of
cubic-centimeter blocks of pure elements. Or it could consist
of prerolled bars, sheets, and wires. Or it could consist of
preformed gears, ratchets, levers and clips. Or it could
consist of more highly refined components, such as
premanufactured motors, switches, gearboxes, and computer chips.
A manufacturing device that accepts ANY of these input streams,
and outputs precise physical copies of itself, is clearly self-
replicating.

During our 1980 NASA study on replicating systems, one amusing
illustration of the Fallacy of the Substrate that we invented
was the self-reproducing PUMA robot. This robot is
conceptualized as a complete mechanical device, plus a fuse that
must be inserted into the robot to make it functional. In this
case, the input substrate consists of two distinct parts: (1) a
stream of 99.99%-complete robots, arriving on one conveyor belt,
and (2) a stream of fuses arriving on a second conveyor belt.
The robot combines these two streams, and the result of this
manufacturing process is a physical duplicate of itself. Hence,
the robot has in fact "reproduced." You may argue that the
replicative act somehow seems trivial and uninspiring, but the
act is a reproductive act, nonetheless.
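
The PUMA example can be reduced to a minimal sketch of the
definition above (Python; the Replicator class and the part
names are illustrative, not a description of any actual design):

    from collections import Counter

    class Replicator:
        def __init__(self, structure):
            # The device's own physical structure, as a bag of parts.
            self.structure = Counter(structure)

        def replicate(self, input_stream):
            """Emit a copy of this device if the input stream contains
            the needed parts; otherwise return None."""
            available = Counter(input_stream)
            if all(available[p] >= n for p, n in self.structure.items()):
                return Replicator(self.structure)  # output == own structure
            return None

    # The fuse-inserting PUMA robot: its substrate is one nearly
    # complete robot body plus one fuse, from two conveyor belts.
    puma = Replicator(["robot body (99.99% complete)", "fuse"])
    offspring = puma.replicate(["robot body (99.99% complete)", "fuse"])
    print("replicated:", offspring is not None)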

Therein lies the core of the Fallacy of the Substrate:
"Replication" can occur on any of an infinite number of input
substrates. Depending on its design, a particular device may be
restricted to replication from only a very limited range of
input substrates. Or, it may have sufficient generality to be
able to replicate itself from a very broad range of substrates.
In some sense this generality is a measure of the device's
survivability in diverse environments. But it is clearly
fallacious to suggest that "replication" occurs only when
duplication of the original manufacturing device takes place
from a highly disordered substrate.

From a replicating systems design perspective, two primary
questions must always be addressed: (1) What is the anticipated
input substrate? (2) Does the device contain sufficient
knowledge, energy, and physical manipulatory skills to convert
the anticipated substrate into copies of itself? Macroscale
self-replicating devices that operate on a simple, well-ordered
input substrate of ~2-5 distinct parts (and up to ~10 parts per
device, I believe) were demonstrated in the laboratory nearly 40
years ago. Japanese robot-making factories use, on their own
production lines, the same model of robots that they produce;
hence these factories may be regarded as at least partially
self-replicating. I have no
doubt that a specialized replicating machine using an input
substrate of up to ~100 distinct (modularized) components (and
~1000 total parts) could easily be built using current
technology. Future advances may gradually extend the generality
of this input substrate to 10^4, 10^6, perhaps even to 10^8
distinct parts.

Of course, the number of distinct parts is not the sole measure
of replicative complexity. After all, a nanodevice which, when
deposited on another planet, can replicate itself using only the
92 natural elements is using only 92 different "parts" (atoms).

If you remain frustrated by the above definition of replication,
try to imagine a multidimensional volumetric "substrate space",
with the number of different kinds of parts along one axis, the
average descriptive complexity per part along another axis, the
relative concentration of useful parts as a percentage of all
parts presented on still another axis, the number of parts per
subsystem and the number of subsystems per device on two
additional axes, and the relative randomicity of parts
orientation in space (jumbleness) along yet another axis. A
given manufacturing device capable of outputting copies of
itself, a device which we shall call a "replicator," has some
degree of replicative functionality that maps some irregular
volume in this substrate space. The fuse-sticking PUMA robot
occupies a mere point in this space; a human being or an amoeba
occupies a somewhat larger volume. A nanoreplicator able to
make copies of itself out of raw dirt would occupy a still
larger volume of substrate space. We would say that a
replicator which occupies a smaller volume of substrate space
has lesser replicative complexity than one which occupies a
greater volume. But it is STILL a "replicator."
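
For concreteness, here is one crude way to parameterize such a
substrate space (Python; the axes included, the ranges, and the
"volume" formula are illustrative simplifications using only a
subset of the axes named above):

    from dataclasses import dataclass

    @dataclass
    class SubstrateEnvelope:
        """Ranges of substrate properties a given replicator can
        work from, each as a (low, high) pair."""
        distinct_part_kinds: tuple    # number of different part types
        complexity_per_part: tuple    # descriptive complexity per part
        useful_part_fraction: tuple   # useful parts as fraction of inputs
        jumbleness: tuple             # randomness of part orientation

        def volume(self):
            """Crude 'replicative complexity': product of axis extents."""
            v = 1.0
            for lo, hi in (self.distinct_part_kinds,
                           self.complexity_per_part,
                           self.useful_part_fraction,
                           self.jumbleness):
                v *= max(hi - lo, 1e-9)
            return v

    # The fuse-inserting PUMA occupies (nearly) a point; a nanodevice
    # that replicates from raw dirt spans a much larger region.
    puma = SubstrateEnvelope((2, 2), (1e6, 1e6), (1.0, 1.0), (0.0, 0.0))
    dirt_eater = SubstrateEnvelope((1, 92), (1, 10), (0.01, 1.0), (0.0, 1.0))
    print(f"PUMA envelope volume: {puma.volume():g}")
    print(f"dirt-eater envelope volume: {dirt_eater.volume():g}")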

So if someone wishes to hypothesize that replicators "cannot be
built" on some scale or another, they must be careful (1) to
specify the input substrate they are assuming, and (2) to prove
that the device in question is theoretically incapable of self-
replication from that particular substrate. Using pre-made
parts is not "cheating." Remember: Virtually all known
replicators -- including human beings -- rely heavily on input
streams consisting of "premanufactured parts" most of which
cannot be synthesized "in house."

Because of their superior speed of operation in both thought and
action, there is little question that microscale replicators
constructed from nanoscale components are theoretically capable
of far greater replicative complexity than macroscale
replicators constructed of macroscale parts. But replicators
can be built at EITHER scale.

III. Linear vs. Exponential Processes

> Actually, it gets worse than that. The real world is full of
> dust, and friction wears down surfaces. The only way in which
> these manifestations of Murphy's Law can be handled is at their
> smallest pieces -- otherwise, smaller bits of dust get wedged
> in the gears of the repair tools, and the process grinds to a
> halt.

There are two arguments advanced here: (A) that dust particles
can insert themselves between moving surfaces, immobilizing
these moving surfaces and causing the machine to halt; and (B)
that frictional abrasion from environmental dust particles
degrades parts until these parts eventually fail.

(A) A correct system design will take full account of all
particles likely to be encountered in the normal operating
environment. Proper component design should ensure that dust
particle diameter << typical part diameter, and that moving
parts have sufficient compliance such that dust particles of the
maximum anticipated size can pass through the mechanism without
incident. It may be necessary to enclose critical component
systems within a controlled environment to preclude entry of
particles large enough to jam the mechanism. But this is a
design specification -- an engineering choice -- and not a
fundamental limitation of replicative systems engineering.
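
As a design-rule illustration only (the particle size and joint
clearances below are assumed numbers, not a real specification),
such a check might look like:

    # Verify that every moving joint has enough clearance for the
    # largest dust particle expected in the operating environment,
    # so a particle passes through rather than jamming the mechanism.

    MAX_DUST_DIAMETER = 50e-6     # m, assumed largest particle present

    joints = {                    # joint name -> designed clearance, m
        "main bearing": 200e-6,
        "gear mesh":     80e-6,
        "lead screw":    30e-6,   # too tight for the assumed dust size
    }

    for name, clearance in joints.items():
        ok = clearance > MAX_DUST_DIAMETER
        action = "ok" if ok else "enclose or redesign"
        print(f"{name}: clearance {clearance * 1e6:.0f} um -> {action}")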

(B) A process such as frictional abrasion is a LINEAR
degenerative process. Assuming proper design, in which dust
particle diameter << typical part diameter, component-surface
error caused by abrasion slowly accumulates until some critical
threshold is surpassed, after which the device malfunctions and
ceases to replicate.

However, the generative (replicative) process will be
EXPONENTIAL (or at least polynomial, once you reach large
populations where physical crowding becomes an important factor)
if the replicators (1) produce fertile offspring and (2) have
full access to all necessary inputs.

Now, an exponential generative process can ALWAYS outcompete a
linear degenerative process, given two assumptions: (1) the
degenerative time constant (e.g. mean time to abrasive failure
of the replicator) ~> the generative time constant (e.g. the
replicator's gestation + maturation period), AND (2) the number
of fertile offspring per generation (e.g., size of the litter)
~>1. Assumption (1) should always be true, because a
"replicator" that breaks down before it has given birth to even
a single fertile offspring is a poor design hardly worthy of the
name. Assumption (2) should usually be true as well. Most
device designs I've seen involve one machine producing one or
more offspring. (Fertile offspring per generation can in theory
be <1 if members of many generations cooperate in the
construction of a single next-generation offspring -- the "it
takes a village" scenario.) Thus, even in the absence of simple
strategies such as component redundancy or a component "repair
by replacement" capability which would further enhance
reliability, component degeneration may slow -- but should not
halt -- the replicative cascade.
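
The interplay of the two processes is easy to see in a toy
simulation (the gestation time, lifetime, and litter size below
are arbitrary illustrative values chosen to satisfy assumptions
(1) and (2)):

    GESTATION = 1.0   # time units per offspring
    LIFETIME = 3.0    # time to abrasive failure; assumption (1): >= GESTATION
    LITTER = 1        # fertile offspring per gestation; assumption (2): >= 1

    population = [0.0]            # ages of the living replicators
    for step in range(1, 11):
        # age everyone by one gestation period; wear removes those past
        # their time to failure (the linear degenerative process)
        population = [age + GESTATION for age in population]
        survivors = [age for age in population if age < LIFETIME]
        # each survivor delivers a litter (the exponential generative
        # process), so the population grows despite the attrition
        population = survivors + [0.0] * (LITTER * len(survivors))
        print(f"t = {step * GESTATION:g}: {len(population)} replicators alive")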

IV. Interchangeable Parts and Large Numbers

> In addition, all manufacturing depends on interchangeable
> parts.... when you start making relatively small things such
> as in microtech, then an extra layer of atoms can make a big
> difference, and these differences keep the parts from being
> interchangeable. The opposite tack, of making things really
> big, sounds nice, but you're going to need about a billion
> subcomponents, if I correctly remember the best estimates of
> Von Neumann's work.

Once you have clearly specified the input substrate you will be
working with, part size and component compliance become design
decisions that are completely under the control of the
engineer. If your input substrate contains parts that
are likely to have a few extra layers of atoms, then your design
must accommodate that level of positional imprecision during
normal operations.
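
A quick numerical illustration (the layer thickness, the number
of extra layers, and the tolerance are assumed values) shows why
an extra atomic layer is negligible for a macroscale part but
must be explicitly designed for at the nanoscale:

    ATOMIC_LAYER = 0.25e-9   # m, roughly one atomic layer
    EXTRA_LAYERS = 3         # assumed worst-case extra layers on a part
    TOLERANCE = 0.01         # assumed allowable relative dimensional error

    for label, part_size in [("1 cm macro part", 1e-2),
                             ("1 um micro part", 1e-6),
                             ("10 nm nano part", 1e-8)]:
        rel_error = EXTRA_LAYERS * ATOMIC_LAYER / part_size
        verdict = ("within tolerance" if rel_error < TOLERANCE
                   else "design must accommodate it")
        print(f"{label}: relative error {rel_error:.1e} -> {verdict}")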

Almost certainly, a replicating machine with a billion parts
will someday be built. However, a replicating machine can ALSO
be built with only three parts. (I have pictures of one such
device in operation, in my files.) The assumption that vast
numbers of parts are required to build a replicator -- even a
nanoreplicator -- is simply unwarranted. Indeed, I fully
expect that the first 8-bit programmable nanoscale assembler
(e.g. of Feynman Prize fame) that is capable of self-replication
will employ an input substrate of no more than a few dozen
different types of parts, and will be constructed of fewer than
1000 of these parts -- possibly MUCH fewer. These
premanufactured parts may be supplied to the assembler as
outputs of some other (chemical? biotech? STM?) nanoscale
process.

V. The Efficient Replicator Scaling Conjecture

> So it *might* actually be possible to build a macro-based
> self-rep system. But I suspect that it would be a lot more
> complicated.

I have formulated a conjecture ("the proof of which the margin
is too narrow to contain") that the most efficient replicator
will operate on a substrate consisting of parts that are
approximately of the same scale as the parts with which it is
itself constructed. Hence a robot made of ~1 cm parts will
operate most efficiently in an environment in which 1-cm parts
(of appropriate types) are presented to it for assembly. Such a
robot would be less efficient if it were forced to build itself
out of millimeter or micron-scale parts, since the robot would
have to preassemble these smaller parts into the 1-cm parts it
needed for the final assembly process. Similarly, input parts
much larger than 1 cm would have to be disassembled or milled
down to the proper size before they could be used, also
consuming additional time, knowledge, and physical resources --
thus reducing replicative efficiency.

If this conjecture is correct, then it follows that to most
efficiently replicate from an atomic or molecular substrate, you
would want to use atomic or molecular-scale parts -- that is,
nanotechnology.
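
The conjecture can also be phrased as a toy cost model (entirely
illustrative; the logarithmic penalty below is an assumed
functional form, not a derived result):

    import math

    def relative_cost(input_part_size_m, native_part_size_m):
        """Assumed penalty for preassembling smaller parts or milling
        down larger ones: grows with the scale mismatch either way."""
        mismatch = abs(math.log10(input_part_size_m / native_part_size_m))
        return 1.0 + mismatch       # 1.0 = matched-scale baseline

    native = 1e-2                   # a robot built from ~1 cm parts
    for size in (1e-4, 1e-3, 1e-2, 1e-1):
        print(f"input parts {size * 1e3:g} mm -> "
              f"relative cost {relative_cost(size, native):.1f}")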

VI. Lunar SRS IC Chips Are NOT Vitamin Parts!

> the self replicating machine shop (which still requires
> vitamin ICs) mentioned in the 1980 NASA Summer Study.

No! Unlike previous studies which assumed only 90-96% closure,
our theoretical design goal for the self-replicating lunar
factory was 100% parts and energy (but not necessarily
information) closure. This SPECIFICALLY included on-site chip
manufacturing, as discussed in Section 4.4.3 of the NASA report
and in Zachary (1981), cited in the report. Of the original
100-ton seed, we estimated the chipmaking facility would mass 7
tons and would draw about 20 kW of power (NASA CP-2255, Appendix
5F, p.293).

100% materials closure was achieved "by eliminating the need for
many...exotic elements in the SRS design...[resulting in] the
minimum requirements for qualitative materials closure....This
list includes reagents necessary for the production of
microelectronic circuitry." (NASA CP-2255, pp. 282-283)

Robert A. Freitas Jr.

Member, Replicating Systems Concepts Team
1980 NASA Summer Study

Editor,
Advanced Automation for Space Missions (NASA CP-2255, 1982)



