I don't have any particular problems with Freitas's title. Aside from
that, Hal Finney is right about everything, particularly including this:
> This is not an early or intermediate level of nanotech development.
> It would be among the most sophisticated nanotech applications imaginable.
> By the time such a global system could be designed, developed and put
> into play, gray goo could have wiped out the world ten times over.
> There seems to be a fundamental mismatch between the sophistication of
> the goodbots, who run an active immune system that checks every cell
> on the planet, and the badbots, who can't manage to operate even as
> efficiently as green plants.
And this:
> It also should be much more objective about the seriousness of the gray
> goo threat. Foresight seems to have made a political decision to downplay
> gray goo in the last several years, and this paper unfortunately seems
> to be consistent with that political position. Much more work needs
> to be done before we have a clear picture of the true scope of the gray
> goo threat. Robert Freitas has made an important contribution, but we
> are not yet in position to settle the matter.
I also have some comments remaining that haven't been entirely obsoleted
by Hal's remarks:
(1) Nanoweaponry fighting it out on the nanoscale arrives relatively
late in the game; in real life, the first two "nanoweapons" to have a
military impact would be diamondoid jet fighters and the like, followed
by saturation launches of vat-grown nuclear weapons. We'd have to get
through that without blowing ourselves up before we'd get the chance to
dust the biosphere. Even if Freitas's paper is entirely true in every
detail, it doesn't mean - as a policy conclusion - that there's a safe
path to nanotechnology.
(2) The battle strategies depicted here aren't twisted enough to reflect
a real-life arms race. Would military replibots really wait around
passively to be swept up into neat little nets? If sentrybots in the
human body can detect replibots, can't the replibots detect the
sentrybots and hide? In some ways, the straightforward analysis of
nanowarfare is like mathematically analyzing a computer's transistors
and then concluding that, since it can reject incorrect passwords with a
success rate of 99.9999999%, the Internet is secure. Thanks to human
cunning and human error, we can't even protect our own computers with
any sort of certainty, even though we control the virtual "laws of
physics" and can scan or alter every byte of RAM; are we really supposed
to win a cracker's war in physical reality?
(3) Assuming that a single radiation strike produces device failure is
conservative when arguing a proof-of-possibility that nanomachines can
be constructed. It is extremely non-conservative when trying to set an
upper limit on the reproduction rate of aerovores. Given that we will
need to master the art of robust design at all levels of the system
architecture simply to allow tolerance and debugging of human errors in
nanotechnological designs - never mind radiation errors - I would be
inclined to run the numbers for N-cleave = 1000, 10000, or perhaps even
higher. (Has anyone tried running simulations of the effect of
radiation errors on atomically detailed designs, e.g. the Parts list at IMM?)
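To make (3) concrete, here's a minimal back-of-the-envelope sketch in
Python, treating radiation strikes as a Poisson process. The per-device
hit rate and replication time below are placeholder assumptions of mine,
not numbers taken from Freitas's paper, and N-cleave is simply the number
of strikes a device is assumed to tolerate before it fails:

    # Sketch: how the assumed damage threshold N_cleave (radiation strikes
    # tolerated before device failure) changes the fraction of replicators
    # still working after one replication cycle.
    # Hit rate and cycle time are placeholder assumptions, not paper figures.

    from math import exp

    def poisson_cdf(k, mean):
        """P(X <= k) for a Poisson-distributed strike count X with the given mean."""
        term = total = exp(-mean)
        for i in range(1, k + 1):
            term *= mean / i
            total += term
        return total

    hit_rate_per_hour = 0.05   # assumed radiation strikes per device-hour (placeholder)
    replication_hours = 100.0  # assumed length of one replication cycle (placeholder)
    mean_hits = hit_rate_per_hour * replication_hours

    for n_cleave in (1, 10, 1000, 10000):
        # The device survives the cycle if it absorbs fewer than n_cleave strikes.
        p_survive = poisson_cdf(n_cleave - 1, mean_hits)
        print(f"N_cleave = {n_cleave:>5}: P(survive one cycle) = {p_survive:.6f}")

Under these assumed numbers, single-strike failure leaves almost nothing
alive after one cycle, while N-cleave in the thousands makes radiation a
rounding error - which is exactly why the assumed threshold dominates any
upper limit you try to put on the reproduction rate.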
(4) "...an immediate international moratorium..." (from 9.0)
Why call attention to ourselves now, risking a media brouhaha and a
ban on all nanotechnology? By the time global ecophagy is a real
threat, everyone will have been screaming about diamondoid fighter jets
for the last year, and there'll be moratoriums all over the place.
A non-governmental organization (i.e. Foresight) publishing
voluntary guidelines is good enough for the immediate memetic effect,
too; you just say, in a heavy, serious tone of voice: "The Foresight
Guidelines explicitly prohibit the development of replicators which can
operate in a biological environment; furthermore, the Foresight
Guidelines require that even vat replicators use a broadcast
architecture..." and so on.
--
sentience@pobox.com    Eliezer S. Yudkowsky
http://pobox.com/~sentience/beyond.html
Member, Extropy Institute
Senior Associate, Foresight Institute