>From: Henri Kluytmans <hkl@chello.nl>
>
>In a posting from March 10th, Zero Powers wrote:
>
> > When talking nano, the best defense is an infallible, impenetrable and
> > non-hackable offense. For your plan to work, your offense would have
> > to be practically perfect and everyone else's offense would have to be
> > imperfect.
>
>No, you DON'T need a *PERFECT* defense.
>
>Let's assume an active shield defense made up of a large number of
>separate bots. Then what you need to overcome any enemy attack is:
>
>More resources on standby than the attacker, and a way to detect
>whether your own defense bots have been secretly modified by the attacker.
>
>Your defense bots may be less efficient at destroying and detecting
>enemy bots, as long as this is compensated for by a larger amount of
>defense/attack resources.
Maybe I'm dense, but it seems to me that if the attacker has *replicating*
bots, your shield would have to be so tight as to prevent even a single bot
from getting through your defenses. Because if just *one* gets through, it
can fairly quickly become billions. And then you are in trouble, no?
And how would you possibly build a shield so tight that not a single
attacker could get through without turning every available atom into part of
your shield? Obviously if every available atom is part of the shield, there
is nothing left for the shield to shield.
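To put rough numbers on that worry (the doubling time here is purely my
own assumption, for illustration):

  # Back-of-envelope: how fast one leaked replicator becomes billions,
  # assuming a 15-minute doubling time and nothing slowing it down.

  DOUBLING_MINUTES = 15

  population = 1
  minutes = 0
  while population < 1_000_000_000:   # one billion bots
      population *= 2
      minutes += DOUBLING_MINUTES

  print(f"1 bot -> {population:,} bots "
        f"in {minutes // DOUBLING_MINUTES} doublings ({minutes / 60:.1f} hours)")
  # -> 1 bot -> 1,073,741,824 bots in 30 doublings (7.5 hours)

Whatever the real doubling time turns out to be, the window between "one
got through" and "billions got through" is measured in doublings, not
years.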
>For example, if you have a hundred times as many defense bots (and the
>bots are of the same order of size as the attacker's), then all your
>defense bots need to do is seek out and destroy the enemy bots.
>Because you have a hundred times the number of bots, you don't need a
>perfect way to destroy the enemy bots. Your defense bots only have to
>be more than 1% as effective at destroying the enemy bots as vice versa!
That seems like a big if. How can you be confident that the "good guys"
will have such a huge disparity in their favor?
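To be fair, his arithmetic does hold up in a toy attrition model,
*provided* you grant the 100-to-1 ratio. A minimal sketch in Python (the
linear-law engagement model and every number in it are my own
assumptions):

  def battle(defenders, attackers, def_rate, atk_rate, dt=1e-4):
      # Linear-law attrition: each side's losses per tick are proportional
      # to the number of engagements, taken here as defenders * attackers.
      while defenders >= 1 and attackers >= 1:
          engagements = defenders * attackers * dt
          defenders, attackers = (defenders - atk_rate * engagements,
                                  attackers - def_rate * engagements)
      return "defense holds" if attackers < 1 else "offense breaks through"

  # 100x the bots, each defender 2% as lethal per bot as an attacker:
  print(battle(1000, 10, def_rate=0.02, atk_rate=1.0))   # defense holds
  # Same 100x ratio, but defenders below the 1% effectiveness line:
  print(battle(1000, 10, def_rate=0.005, atk_rate=1.0))  # offense breaks through

The threshold sits right where he says it does (def_rate > atk_rate/100
in this model); my quarrel is with the premise that you get to start with
the 100-to-1 line-up in the first place.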
>I think it's quite unlikely that (assuming you try to stay informed
>about the latest technology) your enemy can get far enough ahead in
>technology that its bots are a hundred times more effective at
>attacking. (It could even be fundamentally impossible to make
>attacking bots a hundred times more effective.)
Seems to me that these issues will really assert themselves as we approach
the Singularity. And, by definition, you cannot say what will happen past
that point. When you throw things into the mix like self-evolving machine
intelligences and the resulting acceleration of the exponential growth of
technology, a .001% technological advantage by your enemy could well spell
your doom.
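To see how, consider a deliberately crude model of self-accelerating
growth (entirely my own construction): let capability obey dc/dt = r*c^2,
the "improvements improve the improver" feedback loop, which blows up in
finite time at t = 1/r. Now give the enemy a rate edge of one part in a
hundred thousand:

  # Under dc/dt = r * c**2 with c(0) = 1, capability has the closed form
  # c(t) = 1 / (1 - r*t): finite-time blow-up at t = 1/r.  The model and
  # all numbers are illustrative assumptions, nothing more.

  def capability(r, t):
      return 1.0 / (1.0 - r * t)

  r_us = 1.0
  r_them = 1.0 * (1 + 1e-5)    # their .001% edge in improvement rate

  for t in (0.5, 0.99, 0.999, 0.99998, 0.99999):
      lead = capability(r_them, t) / capability(r_us, t)
      print(f"t = {t:<8} their lead: {lead:>12,.2f}x")

Their lead is invisible for almost the whole race (about 1.00001x at the
halfway mark), then explodes at the knee of the curve (roughly 2x and then
100,000x in this run), which is exactly where a Singularity scenario says
the race gets decided.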
And remember, these will be *knowledge* based weapons and defenses. It
won't take a huge economy or brick-and-mortar infrastructure to take
advantage of them. All it takes is a revolutionary *idea* to give you a
significant advantage over the competition. Look at how Craig Venter got
the drop on the huge international Human Genome Project with what turned out
to be a very good idea. He didn't have nearly the human or financial
resources of the Human Genome Project. All he basically had was a good
idea, and now he's got the international genetics community on the run!
>Of course, you will have to keep an eye on your potential enemies,
>to check that they are not trying to create more resources than you
>can overcome with the defense resources you have available.
Chances are your enemies will not take kindly to your keeping an eye on
them, and will use all their available resources to prevent you from doing
that.
>The most difficult detail could be the mechanism used to detect
>modification of your own defense bots. Maybe some kind of public
>key certificate system could be used, where the secret key is
>destroyed by the bot when its integrity is about to be violated.
>(If all public key systems are rendered useless, for example by
>quantum computers, a secret key system with some heavily guarded
>central lookup systems could be used instead. However, this last
>option is more vulnerable and less efficient.) These would be used
>together with external recognition mechanisms.
>
>However, all these detection mechanisms don't need to have
>perfect 100% efficiency. With a hundred times more resources,
>more than 1% efficiency would already be sufficient!
>(Assuming the same destruction efficiency as the enemy.)
Again you are assuming a huge resource disparity between you and your enemy.
What do you base that on?
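Setting the resource question aside, the key-destruction idea itself is
easy to sketch. Here's a toy version in Python, with HMAC
challenge-response standing in for his public key certificates; the
tamper sensor and all the plumbing are hypothetical:

  import hashlib
  import hmac
  import os

  class DefenseBot:
      """Toy bot that proves its integrity by answering HMAC challenges."""

      def __init__(self, secret: bytes):
          self._secret = secret        # provisioned at manufacture

      def on_tamper_detected(self):
          self._secret = None          # zeroize: the bot can never attest again

      def attest(self, challenge: bytes):
          if self._secret is None:
              return None
          return hmac.new(self._secret, challenge, hashlib.sha256).digest()

  def verify(bot, secret: bytes) -> bool:
      challenge = os.urandom(32)       # fresh nonce, so replays don't work
      response = bot.attest(challenge)
      expected = hmac.new(secret, challenge, hashlib.sha256).digest()
      return response is not None and hmac.compare_digest(response, expected)

  secret = os.urandom(32)
  bot = DefenseBot(secret)
  print(verify(bot, secret))           # True:  untampered bot attests correctly
  bot.on_tamper_detected()
  print(verify(bot, secret))           # False: the zeroized bot can't answer

The crypto is the easy part; the hard part is the tamper sensor that
decides when to zeroize, and with a symmetric scheme the verifier's copy
of the keys is itself a fat target, which is just his central-lookup
weakness wearing a different hat.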
>I'm fairly confident that detection mechanisms with a certain
>limited efficiency are possible.
>
>So ultimately it all comes down to who has the most resources
>(i.e., the most defense/attack bots at hand). And whoever starts
>creating these resources first will have the edge.
Agreed, if you start first you will probably have the edge (although see my
argument above about Craig Venter and revolutionary ideas in the age of
information-based weapons systems). But if you look around, you will see
that *everyone* is starting first. The US, Europe and Japan (and perhaps
others) all have very active nano-research programs underway. There is no
way to tell who is closest to creating an assembler. The first to develop
the assembler can take over the world. How do you prevent that if you are
not the first assembler maker?
>(However this is assuming that no fundamentally new physics
>will be discovered by your enemy that you are not aware of.)
Not really worried about that. We're not talking about making things *that*
small.
-Zero
"I like dreams of the future better than the history of the past"
--Thomas Jefferson