From: hal@finney.org
Date: Mon Mar 20 2000 - 18:27:04 MST
Eliezer S. Yudkowsky, <sentience@pobox.com>, writes:
> The one thought that kept running through my mind while reading
> _Nanosystems_ is that assuming that a single dislocated atom knocks out
> a mechanism is not a "conservative" assumption. It is vastly
> optimistic. Even assuming that a single dislocated atom causes a
> complete system crash is optimistic. Speaking as a programmer, if every
> error produced an immediate and simple crash, life would be a lot easier
> than it is. Assuming that a single dislocated atom causes the
> production of subtly wrong moieties that break or subtly corrupt the
> rest of the system; now that's realistic. Nanodebugging the declarative
> design errors will be nightmare enough; debugging radiation damage will
> be beyond imagining.
I don't think this is quite right. What Drexler tried to show was that
even if a part is counted as broken whenever a single atom is out of
place, these malfunctioning units would still be an inconsequential
fraction of the whole. That assumption is conservative because many
single-atom displacements would in fact have only minor effects, or no
effect at all.
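To put rough numbers on it (purely illustrative assumptions, not
figures from _Nanosystems_): if each atom independently has some small
chance of being displaced, the fraction of parts spoiled even under the
worst-case "one bad atom kills the part" rule is easy to estimate:

    # Sketch with invented numbers: fraction of parts ruined under the
    # "any displaced atom breaks the part" counting rule.
    p_atom = 1e-12        # assumed probability a given atom is displaced
    atoms_per_part = 1e6  # assumed atom count for one mechanism

    # A part is broken if at least one of its atoms is displaced:
    p_broken = 1 - (1 - p_atom) ** atoms_per_part
    print(p_broken)       # ~1e-6, about one part per million

Even under that pessimistic rule, the defective fraction stays tiny as
long as the per-atom error rate is low enough.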
Drexler did not assume that all failures are passive; rather, he showed
that the number of failures can be made so small that the amount of
trouble they cause should be very limited. Once you have a model for
your error rate, you can look at your application and judge whether
even rather dangerous failure modes are tolerable, provided there are
few enough failing units.
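For instance, the judgment might run like this (all three figures are
made up for illustration):

    # Sketch: expected number of *dangerously* failing units, given an
    # error-rate model.  Every number here is an assumption.
    total_units = 1e12   # units in the whole system
    p_broken = 1e-6      # per-unit failure probability, as estimated above
    p_dangerous = 1e-3   # assumed fraction of failures that misbehave actively

    expected_dangerous = total_units * p_broken * p_dangerous
    print(expected_dangerous)   # ~1e3 actively bad units

Whether a thousand actively misbehaving units out of a trillion is
tolerable depends entirely on the application.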
A self-replicating system is an especially difficult case. The total
mass of such systems may grow enormously over the centuries, so even an
extremely small percentage of misbehaving replicators could cause
considerable trouble for the people unlucky enough to encounter their
offspring, assuming the defective units could reproduce for a long time
without their problems being detected.
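A toy calculation (the replication parameters are invented) shows why
the numbers turn against you here:

    # Sketch: a per-copy defect rate that is negligible for one device
    # gets multiplied by exponential population growth.
    p_defect = 1e-9    # assumed chance one replication yields a bad copy
    generations = 40   # doublings before anyone inspects the population

    population = 2 ** generations         # ~1e12 replicators
    expected_bad = population * p_defect  # ~1,000 misbehaving copies
    print(population, expected_bad)

And each of those bad copies can found a defective lineage of its own
if it keeps reproducing undetected.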
> There's no way - that I know of, anyway, correct me if I'm wrong - to
> read out the atomic positions inside a complex molecular structure. We
> have enough trouble designing computer systems that will operate in
> spite of programmer errors. Imagine the problem of designing a computer
> system that has to operate in spite of random bitflips, when there's no
> way to examine outputs directly, just take a few simple measurements.
At the time of assembly it is much easier to check for errors.
Repeated insertion and verification steps can reduce the error rate to
a very low level. It is true that after assembly is complete, internal
errors arising spontaneously or from radiation may be difficult to
detect. Nevertheless, functional tests of the entire unit can be used
to characterize and detect many failure modes.
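As a sketch of why repeated verification is so powerful (the error and
miss rates below are assumed, not measured):

    # An assembly error survives only if every verification pass misses it.
    p_err = 1e-4    # assumed chance an insertion step goes wrong
    p_miss = 1e-3   # assumed chance one verification pass overlooks it
    checks = 3      # independent verification passes per site

    p_residual = p_err * p_miss ** checks
    print(p_residual)   # 1e-13 residual error per site

Each added check multiplies the residual error rate by the miss
probability.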
> It seems to me that one of the major problems of nanotechnology will be
> designing systems that fail reliably (or systems in which any error not
> "measurable" to a checking mechanism must automatically maintain all
> design characteristics; after all, any error which shows up externally
> must be "measurable" in some sense).
Yes, or another approach would be to design systems that can be taken
apart relatively easily for destructive inspection. Then you can sample
a certain percentage of your product (whether replicators or not) and
determine whether the units match the design specs.
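The sampling arithmetic is straightforward (the defect fraction and
confidence level below are illustrative):

    import math

    # How many units must be torn down to have confidence c of seeing
    # at least one defect, if the true defect fraction is f?
    f = 1e-6   # assumed fraction of off-spec units
    c = 0.95   # desired confidence of catching at least one defect

    # P(no defect seen in n samples) = (1 - f)^n; require it <= 1 - c:
    n = math.log(1 - c) / math.log(1 - f)
    print(int(n) + 1)   # ~3,000,000 units to inspect

Very rare defect modes therefore demand very large destructive samples,
which is itself a cost to budget for.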
Hal