From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Sun May 14 2000 - 21:01:58 MDT
On Sun, 14 May 2000, Eliezer S. Yudkowsky wrote:
Quoting Hal:
> > This is not an early or intermediate level of nanotech development.
> > It would be among the most sophisticated nanotech applications imaginable.
> > By the time such a global system could be designed, developed and put
> > into play, gray goo could have wiped out the world ten times over.
I disagree strongly with this as I've said in another post. Our *current*
sensing technology is more than sufficient to detect very small hot spots.
(I'm sure Spike could comment here.) If there is reason to suspect
covert activities, we could easily launch equivalents of the Space
Infrared Telescope and turn them around to study earth. It goes without
saying that we are not even completely aware of the sophistication
of our existing detection capabilities. Considering the rapid
progress in photon detectors (e.g., ten years ago astronomers
couldn't buy a 3-million-pixel CCD at any price, and now we have
them in consumer devices), I have little doubt that our detection
capabilities
will be up to the task.
With sufficient monitoring capacity we would know the *day* anyone
released badbots on the planet.
> > There seems to be a fundamental mismatch between the sophistication of
> > the goodbots, who run an active immune system that checks every cell
> > on the planet, and the badbots, who can't manage to operate even as
> > efficiently as green plants.
Not really; look at your own immune system, running around checking
the state of all the cells in your body. Its cells are usually able
to police the neighborhood fairly effectively while consuming only
a few percent of the resources available to you.
It isn't that badbots (or goodbots) will never be able to operate
more efficiently than plants; it is that it will take a very long
time to develop such efficient machines, *and* that to operate much
more efficiently, they need to operate *slower* than plants.
> (1) Nanoweaponry fighting it out on the nanoscale arrives relatively
> late in the game; in real life, the first two "nanoweapons" to have a
> military impact would be diamondoid jet fighters and the like,
I see no significant advantages to "diamondoid jet fighters".
You can lighten the airframe somewhat, but you still have to
have many components that cannot be made out of diamond because
it cannot withstand high temperatures (engine turbine blades,
for example). Everyone seems to think nanotech is "indestructible",
and that simply is not true. So diamond is 10x stronger than steel;
that just means your missiles have to have a 10x larger explosive
yield to do equivalent damage to a diamondoid jet versus a steel jet.
The U.S. already makes the best jets in the world, with some competition
from the Russians. So what is the point of funding large efforts
to make nanotech jets? Given the computational capacity that nanotech
will allow you to put into missiles, I question whether jets are
viable weapons *at all* in a nanotech era.
> followed by saturation launches of vat-grown nuclear weapons.
The problem with nuclear weapons is that you *still* need the
uranium. Robert has told me that Gina has a copy of a paper he did
showing that "mining" gold using nanotech may be prohibitively
expensive. Mining uranium may be even worse.
To grow nuclear weapons you are going to need (a) fuel and (b)
rad-hardened nanotech. I seem to recall that during the
development of the A-bomb we lost at least one scientist
at Los Alamos because he picked up a hunk of plutonium to
prevent an accident. What makes you think nanotech can
work in that environment? Nuclear fission works by
the release of neutrons; a kg of fissionable material
releases a lot of them, and no matter *what* the nearby
material is, it is going to sustain damage.
> We'd have to get through that without blowing ourselves up before
> we'd get the chance to dust the biosphere.
We have enough nuclear and bioweapons today to blow ourselves up,
and we aren't doing it. You *MUST* make a credible case
that someone will (a) get nanotech before anyone else; (b)
use it to develop advanced weapons; (c) grow such large quantities
of those weapons, and train people to operate them (or create
"intelligent" weapons), that they represent a threat; and (d) do it
all without anyone realizing what is going on. Then, after you have
met *all* of those criteria, you have to present a case that
the threat would be so large and present itself so fast that
conventional weapons and tactics would be useless against it.
> Even if Freitas's paper is entirely true in every
> detail, it doesn't mean - as a policy conclusion - that there's a safe
> path to nanotechnology.
But it is a first attempt to show that all of the "crying wolf"
that people do *may* be crying at shadows on the wall rather than
at real wolves. You can't *prove* that we aren't going to be turned
into dust tomorrow by an asteroid we didn't see. All you can do
is make reasonable estimates that it's highly unlikely to occur.
That is about the best we can do with nanotech as well.
> (2) The battle strategies depicted here aren't twisted enough to depict
> a real-life arms race. Would military replibots really wait around
> passively to be swept up into neat little nets?
No, but if you are busy avoiding being swept up, that is likely to
put a significant restriction on your reproductive efforts. The nets
could simply herd the badbots into an area where they exhaust the
resources and as a result are forced to stop replicating.
> If sentrybots in the human body can detect replibots, can't the
> replibots detect the sentrybots and hide?
Well, "detection" in the bio/nano worlds "usually" involves touching
the enemy (unless the badbots are stupid enough to broadcast their
presence). Acoustic detection may also be possible. So the likely
situation is that the good and bad bots detect each other simultaneously.
However, the goodbot can broadcast a signal that rapidly recruits more
goodbots while the badbot may only have limited reinforcement available.
It's *entirely* a numbers game, *just* as it is now with bacteria.
If I inject you with a cc of streptococcus, you are in *big* trouble.
However, if I inject you with one streptococcus, the likelihood of it
replicating much before your immune system takes it out is very low.
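The numbers game can be made concrete with a toy branching-process
model. The per-generation kill probability below is a made-up
illustrative number, not a measured one; the point is only how
sharply the outcome depends on inoculum size:

```python
# Toy Galton-Watson model of the "numbers game": each bacterium is
# independently killed by the immune system (probability d) before
# it can divide into two.  The per-founder extinction probability q
# solves  q = d + (1 - d) * q**2,  giving  q = d / (1 - d)  for
# d < 0.5.  The value d = 0.45 is purely illustrative.

def establishment_probability(n_founders: int, d: float = 0.45) -> float:
    """P(at least one lineage survives) for an inoculum of n_founders cells."""
    q = d / (1.0 - d) if d < 0.5 else 1.0  # extinction prob per founder
    return 1.0 - q ** n_founders

# One stray streptococcus: the immune system usually wins.
print(f"1 cell:    {establishment_probability(1):.3f}")
# A cc of culture (~1e8 cells): infection is essentially certain.
print(f"1e8 cells: {establishment_probability(10**8):.6f}")
```

Even with each individual cell more likely than not to be killed
before dividing, a large enough inoculum establishes with near
certainty, which is exactly the asymmetry between a cc of
streptococcus and a single cell.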
> [snip] and can scan or alter every byte of RAM; are we really supposed
> to win a cracker's war in physical reality?
It depends how important it is to you. We don't have good computer
security now because it is too much trouble for humans to remember
3 passwords. However, we do have retinal- and fingerprint-recognition
devices coming that will make it *very* hard to gain improper access.
The *reason* most medical procedures are so expensive is that
you have to do almost everything possible to avoid killing the
patient. Most computer systems can't "kill" their users and so
security failures are tolerated. When computers can kill their
users, failures won't be acceptable. The reason it took computers
so long to get into automobiles is that the automakers didn't
trust them to be reliable enough not to kill someone.
>
> (3) Assuming that a single radiation strike produces device failure is
> conservative when arguing a proof-of-possibility that nanomachines can
> be constructed.
I can solve this very simply. Researchers at Berkeley describe
a high-energy cesium ion source that can deliver 0.4 amps of
ions in 0.37-microsecond pulses (delivering 400 trillion watts).
Any nanotech in this beam is converted to *slag*. If you think
nanotech has active electrostatic or magnetic defenses, you just
convert the beam to neutrons, as is done in the Berkeley
Rotating Target Neutron Source.
See:
http://www.nuc.berkeley.edu/thyd/icf/IFE.html (and pages therein)
http://www.nuc.berkeley.edu/fusion/neutron/rtns.html
If you don't like those methods, you can hit them with a propane
torch, which should reach the temperatures at which diamond begins
to oxidize and burn, 870-1070 K (Nanomedicine, p. 296). If you want
to kill the badbots faster, buy yourself a hotter flame, e.g.:
Oxy-propane 4579F
Oxy-NG 4600F
Oxy-propylene 5240F
Oxy-MAPP 5301F
Oxy-acetylene 5720F
All of these are above the temperature at which diamond converts
to graphite in a non-oxygen atmosphere, ~1800 K (2780F).
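A quick sanity check (straight Fahrenheit-to-Kelvin arithmetic, no
assumptions beyond the figures listed above) confirms that every
flame on the list clears the ~1800 K threshold with room to spare:

```python
# Convert the flame temperatures above from degrees F to kelvin and
# compare each against the ~1800 K diamond-degradation figure cited
# from Nanomedicine.  Conversion: K = (F + 459.67) * 5/9.

FLAMES_F = {
    "Oxy-propane":   4579,
    "Oxy-NG":        4600,
    "Oxy-propylene": 5240,
    "Oxy-MAPP":      5301,
    "Oxy-acetylene": 5720,
}

THRESHOLD_K = 1800  # approximate graphitization temperature of diamond

def f_to_k(deg_f: float) -> float:
    """Convert degrees Fahrenheit to kelvin."""
    return (deg_f + 459.67) * 5.0 / 9.0

for name, t_f in FLAMES_F.items():
    t_k = f_to_k(t_f)
    print(f"{name:14s} {t_f}F = {t_k:6.0f} K "
          f"(margin over {THRESHOLD_K} K: {t_k - THRESHOLD_K:+5.0f})")
```

Even the coolest flame listed, oxy-propane, comes out near 2800 K,
roughly a kilokelvin above the threshold.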
And finally, if you want to be a little clever about it, you could
lure those little badbots between a couple of electrodes with a nice
carbon-rich snack and then zap them with enough voltage to disrupt
all of the bonds in their precious little bodies...
> (Has anyone tried running simulations of the effect of
> radiation errors on atomically detailed designs, e.g. the Parts list at IMM?)
I know of nobody who has tried this. I've been talking about it
for a couple of years because I want the data to determine probable
lifetimes for space-probe-bots. The code does exist at LLNL and the
other defense labs to run the simulations at the atomic level.
However I'm unsure what would be required to get access to it for
testing nanodesign radiation hardness.
However, you can make some estimates based on bio-sterilization
procedures. Standard food sterilization has to "guarantee" that no
microbes capable of reproducing remain once the procedure is
complete. To do this, the foods are dropped into a tank and exposed
to a hefty radiation dose, presumably high enough to guarantee
several double-strand breaks in the chromosomes of any bacteria
in the food.
So a moderately safe nanoshield would consist of a couple of
lead walls filled with buckshot of cobalt-60 (or an equivalently
strong radiation source). Since we are on the topic, I'll
ask Robert what the "minimal" wall design is that maximizes human
safety inside the walls but is guaranteed to sterilize nanobots
that try to come through it.
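The sterilization-dose arithmetic can be sketched with the standard
single-hit survival model used in food irradiation. The D10 value
(dose for a tenfold population reduction) of 3 kGy below is an
assumed illustrative figure, not data for any particular organism,
still less for a rad-hardened nanobot:

```python
# Single-hit survival model: surviving fraction S = 10**(-D / D10).
# D10 is the dose that kills 90% of the population; the 3 kGy value
# here is assumed for illustration only.

D10_KGY = 3.0  # assumed: dose (kGy) for one decade of kill

def dose_for_log_reduction(log10_reduction: float,
                           d10: float = D10_KGY) -> float:
    """Dose (kGy) needed to cut the population by 10**log10_reduction."""
    return d10 * log10_reduction

def surviving_fraction(dose_kgy: float, d10: float = D10_KGY) -> float:
    """Fraction of the population still viable after dose_kgy."""
    return 10.0 ** (-dose_kgy / d10)

# A "12-log" reduction, the usual commercial-sterility benchmark:
print(f"12-log dose: {dose_for_log_reduction(12):.0f} kGy")
print(f"expected survivors per 1e9 initial: "
      f"{1e9 * surviving_fraction(36.0):.1e}")
```

The same back-of-the-envelope logic would set the source strength
and wall transit time for the buckshot shield: pick the number of
decades of kill you want guaranteed, and the required dose scales
linearly with it.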
> Why call attention to ourselves now, risking a media brouhaha and a
> ban on all nanotechnology?
A ban will not work. You can't regulate small technologies that
can be constructed from relatively abundant materials. How do
you ban AFM construction? OK, let's regulate the sale of all
piezoelectric crystals. Hmmm, doesn't that mean I have to ban
the sale of all the elements from which those crystals can be
grown? Right!
> By the time global ecophagy is a real threat, everyone will have been
> screaming about diamondoid fighter jets for the last year, and there'll
> be moratoriums all over the place.
Why? If the nations of the free world see this coming, the
probability is that they will all have diamondoid fighter jets
within a few years of each other. You don't think the EU or China
or Russia is going to sit by twiddling their thumbs while the U.S.
develops this technology, do you? Technology is technology; the
economic benefits of the technology used "properly" are very
clear, so development is likely to occur at relatively similar
rates around the world (in the more developed nations).
> A non-governmental-organization (i.e. Foresight) publishing
> voluntary guidelines is good enough for the immediate memetic effect,
> too; you just say, in a heavy, serious tone of voice: "The Foresight
> Guidelines explicitly prohibit the development of replicators which can
> operate in a biological environment; furthermore, the Foresight
> Guidelines require that even vat replicators use a broadcast
> architecture..." and so on.
You want to go further than that: you want to make people aware that
it is a serious problem, but one that can be defended against with
a safety level that corresponds to your investment in sensors and
defensive devices. Wouldn't it be better to put into place (legally,
on a global basis) things like the NIH guidelines that work to
prevent accidents, or the bio/chemical weapons treaties that managed
to begin putting the bad genies back into their bottles shortly
after they got out?
You want to say -- "Look, here are the good things and here are the
bad things. If we can work together we could perhaps avoid wasting
too many resources on dealing with the bad things." For example,
how about entirely open-source defensive designs developed by
international teams?
Robert
========
On Sun, 14 May 2000, Eliezer S. Yudkowsky wrote:
> I don't have any particular problems with Freitas's title. Aside from
> that, Hal Finney is right about everything, particularly including this:
>
> > This is not an early or intermediate level of nanotech development.
> > It would be among the most sophisticated nanotech applications imaginable.
> > By the time such a global system could be designed, developed and put
> > into play, gray goo could have wiped out the world ten times over.
> > There seems to be a fundamental mismatch between the sophistication of
> > the goodbots, who run an active immune system that checks every cell
> > on the planet, and the badbots, who can't manage to operate even as
> > efficiently as green plants.
>
> And this:
>
> > It also should be much more objective about the seriousness of the gray
> > goo threat. Foresight seems to have made a political decision to downplay
> > gray goo in the last several years, and this paper unfortunately seems
> > to be consistent with that political position. Much more work needs
> > to be done before we have a clear picture of the true scope of the gray
> > goo threat. Robert Freitas has made an important contribution, but we
> > are not yet in position to settle the matter.
>
> I also have some comments remaining that haven't been entirely obsoleted
> by Hal's remarks:
>
> (1) Nanoweaponry fighting it out on the nanoscale arrives relatively
> late in the game; in real life, the first two "nanoweapons" to have a
> military impact would be diamondoid jet fighters and the like, followed
> by saturation launches of vat-grown nuclear weapons. We'd have to get
> through that without blowing ourselves up before we'd get the chance to
> dust the biosphere. Even if Freitas's paper is entirely true in every
> detail, it doesn't mean - as a policy conclusion - that there's a safe
> path to nanotechnology.
>
> (2) The battle strategies depicted here aren't twisted enough to depict
> a real-life arms race. Would military replibots really wait around
> passively to be swept up into neat little nets? If sentrybots in the
> human body can detect replibots, can't the replibots detect the
> sentrybots and hide? In some ways, the straightforward analysis of
> nanowarfare is like mathematically analyzing a computer's transistors
> and then concluding that, since it can reject incorrect passwords with a
> success rate of 99.9999999%, the Internet is secure. Thanks to human
> cunning and human error, we can't even protect our own computers with
> any sort of certainty, even though we control the virtual "laws of
> physics" and can scan or alter every byte of RAM; are we really supposed
> to win a cracker's war in physical reality?
>
> (3) Assuming that a single radiation strike produces device failure is
> conservative when arguing a proof-of-possibility that nanomachines can
> be constructed. It is extremely non-conservative when trying to set an
> upper limit on the reproduction rate of aerovores. Given that we will
> need to master the art of robust design at all levels of the system
> architecture simply to allow tolerance and debugging of human errors in
> nanotechnological designs - never mind radiation errors - I would be
> inclined to run the numbers for N-cleave = 1000, 10000, or perhaps even
> higher. (Has anyone tried running simulations of the effect of
> radiation errors on atomically detailed designs, e.g. the Parts list at IMM?)
>
> (4) "...an immediate international moratorium..." (from 9.0)
> Why call attention to ourselves now, risking a media brouhaha and a
> ban on all nanotechnology? By the time global ecophagy is a real
> threat, everyone will have been screaming about diamondoid fighter jets
> for the last year, and there'll be moratoriums all over the place.
> A non-governmental-organization (i.e. Foresight) publishing
> voluntary guidelines is good enough for the immediate memetic effect,
> too; you just say, in a heavy, serious tone of voice: "The Foresight
> Guidelines explicitly prohibit the development of replicators which can
> operate in a biological environment; furthermore, the Foresight
> Guidelines require that even vat replicators use a broadcast
> architecture..." and so on.
> --
> sentience@pobox.com Eliezer S. Yudkowsky
> http://pobox.com/~sentience/beyond.html
> Member, Extropy Institute
> Senior Associate, Foresight Institute
>
This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 15:28:37 MST