Re: Iran pushes for nanotech

From: Brian Atkins (brian@posthuman.com)
Date: Sat Nov 10 2001 - 22:36:41 MST


Anders Sandberg wrote:
>
> On Fri, Nov 09, 2001 at 09:14:11PM -0500, Brian Atkins wrote:
> >
> > Quite simply, putting full blown nanotech into the hands of a single
> > dangerous person would make 9/11 look like a little pinprick. I think
> > your proposal of openly developing potential weapons of mass destruction
> > is utterly naive. It would be something like openly developing nukes
> > in a parallel universe where any matter is capable of being used as
> > the core. The end result would have been that everyone would have
> > built them, and instead of a cold war we would have had something much
> > worse.
>
> Would you please turn down your amygdala a bit? I did not say "let's

Maybe some people need to turn theirs up a bit? :-)

> give everybody guns to play with" but rather "I prefer the guns in the
> open and distributed so that we can force everybody to be accountable
> with them". Quite a difference. Have you read "The Weapon of Openness"

Frankly, I don't see the difference. I must be completely missing the
point, but to me, "guns in the open and distributed" equals "give everybody
guns to play with". We must have completely different concepts of what we
are trying to say. When you talk about "open and distributed", do you mean
the technical capabilities of the guns but not the guns themselves? Or
the blueprints?

> by Arthur Kantrowitz?
> (http://www.foresight.org/Updates/Background4.html) If not, please do
> that now.

I have read it, but frankly it makes no sense in this situation. Some of
the flaws:

The theory that Darwinian reasoning shows open societies have won does not
apply to future existential-risk scenarios such as openly developed nanotech
that is cheaply reproducible by all. In fact the paper says so: "This paper
is concerned with the impact of secrecy vs. openness policy on the
development of military technology in a long duration peacetime rivalry."
The cold war is not applicable here.

The complaint that secret research tends to leave avenues unexplored and
stifle free thinking is not a sufficient reason to allow open development
of nanotech. If a case can be made that open development can be done
safely, then and only then will it be OK to do so. If you can't do it
safely in the open, and doing it closed makes things even worse, then the
only answer is not to do it at all.

Note that the author of the paper does not advocate openness for all things.

>
> > What exactly is your scenario for where terrorists have drextech and
> > yet don't cause severe worldwide damage? A nanotech immune system?
> > And what happens when they come up with a worm that breaches the
> > defenses? Humans have already proven themselves incapable of devising
> > a system that would prevent such disasters.
>
> Exactly what systems have been tried and failed? You make it sound like
> people have tried to build immune systems forever. In reality, what we

Well, they basically have. I'm talking about something relatively simple
compared to building nanotech immune systems: computer security. The
problem with nanotech is that you have to get it right from the very
start; you don't get 30 years to fool around. Accelerating the date at
which the technology is available to "threats" only makes the problem
more likely to occur, as far as I can see.

Have you read "Bloom" by Wil McCarthy?
http://www.amazon.com/exec/obidos/ASIN/0345408578/

> need is different kinds of defenses against different threats,
> overlapping and evolving to match the pace of what people invent or
> imagine. It seems very likely that people would spend a sizeable
> investment on immune systems, institutions controlling nano use and
> various other methods of limiting risk, and the collective investment
> would be far larger than any investment in coming up with malicious
> nanotech.

The problem, as I see it: isn't it only possible to really work on and
test nanotech defenses /after/ nanotech itself is "invented"? And if
it is invented openly and the "recipe" is known to all, then the bad
guys will have a wide-open window of multiple years in which to do bad
things while the defenses are still being set up.

Or are you proposing that no assemblers be openly released until we have
thought up every possible offensive use and defense? You'll be waiting
a long time...

>
> It would be useful if we could estimate how dangerous a dangerous
> person with nanotech is - and when? You have to estimate not just his
> offensive capabilities, but also how dangerous they are compared to the
> defenses of society around him. Quite a few of the nano-scare debates
> seem to lack this kind of analysis.

See above. From what I can see, in an open development process the
dangerous person can use the technology for their own purposes as soon as
it is technically feasible for them to do so, which is determined only by
their research environment (budget, available complementary tech, etc.).
If there are not foolproof defenses in place before then, the whole
notion of open development makes no sense. You have to convince me that
somehow (unlike the Internet/computing example) the defenses will always
stay one or more steps ahead of the offense instead of the reverse.

The first Net worm appeared well before the Net became widespread; in
fact, you could say it happened while the Net was still being created and
researched. So apparently you can't trust that the academics and others
with access to it at that stage will all be good. Will the same thing
happen with nanotech?

>
> > Why fool around with such a possibility when leaving the nanotech
> > carefully centralized would probably still allow for 90% of the
> > usefulness? Do I really need a home assembler? What for?
>
> Because centralization means that somebody has more control over the
> nano than you have. Who do you trust with it? As I said, governments
> have a bad history of that. This is why we need oversight and
> accountability, and not just put all the responsibility at one group.
>

I would trust the US government with it more than I would trust leaving it
open for terrorists to use. I don't mind paying an extra $10 to have my
nanocomputer FedExed to me. What I care more about is surviving long
enough to use it. If you feel that the USA could not be left to develop it
in a carefully controlled fashion, as it did with nukes, then perhaps
this really is so dangerous it should just be left alone?

-- 
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/

