NANO: Institutional Safety

From: GBurch1@aol.com
Date: Sun Nov 14 1999 - 08:51:25 MST


In a message dated 99-11-13 09:48:56 EST, blenl@sk.sympatico.ca (David
Blenkinsop) wrote:

> Finally, now that I've mentioned security related matters, are there any
> really bright ideas for lessening possible dangers of nanotech, short of
> running far, far away, that is? Someone is going to tell me that AIs
> will take over and take care of it all, I'm sure, but I'm really more
> interested in whether organizations of relatively ordinary humans could
> somehow deal with this competently?

A small group of folks associated with Foresight got together and talked
about this question in February of this year. We produced a short paper,
which ought to be published soon - so said Ralph Merkle last night. The best
suggestion we could come up with was to try to emulate the process that
occurred with genetic technology in the 1970s, where a regime of
self-regulation developed and was slowly adopted into regulatory law. In
short, the group suggested proscribing the release of freely autonomous
replicators into the environment, together with technical safeguards
against mutation.

The group was not optimistic that these measures could completely and
reliably prevent a nanotech disaster. The best hope was that a disaster
could be forestalled until defensive technologies caught up with and
surpassed offensive ones; the technologists believed offensive
capabilities would come first, creating a "zone of danger" of
indeterminate length.

     Greg Burch <GBurch1@aol.com>----<gburch@lockeliddell.com>
      Attorney ::: Vice President, Extropy Institute ::: Wilderness Guide
      http://users.aol.com/gburch1 -or- http://members.aol.com/gburch1
                         "Civilization is protest against nature;
                  progress requires us to take control of evolution."
                                           Thomas Huxley


