From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Aug 18 2002 - 08:35:11 MDT
Anders Sandberg wrote:
>
> OK, I perhaps should explain a bit more clearly why I would trust
> layered but imperfect defenses (although Bruce Schneier does a better
> job at that).
>
> Imagine the enemy having to overcome one super defense (forcefields, SI,
> whatever) to wreak havoc. No defense is ever perfect, so there is a
> finite probability P of it failing. The cost of a perfect defense is
> also enormous and grows fast; I would expect it to scale at least as
> 1/P. A 99.9% safe defense would be ten times as expensive as a 99% safe
> system.
>
> Compare this super-defense with having N layers of less powerful
> defenses, each with a probability Q of failing. If the threat got through
> defense 1, it would still have a 1-Q chance of being caught in the next.
> The risk of it passing all the layers is Q^N, which shrinks very fast as
> N increases. The total cost would be N/Q. So if we have ten layers of 90%
> safe defenses the risk of failure is 10^-10 and the cost is 100. A single
> super-defense with the same risk would cost 10^10.
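For concreteness, the arithmetic in the quoted example can be checked
with a few lines of Python. This is a minimal sketch; the cost model
(cost scaling as 1/P) and the independence of layer failures are
assumptions of the quoted argument, not established facts:

    # Quick check of the quoted arithmetic. Assumes, as the quote does,
    # that layers fail independently and that a defense with failure
    # probability p costs about 1/p (both are modeling assumptions).

    def layered(q, n):
        """Risk and cost of n independent layers, each failing with
        probability q and costing 1/q."""
        return q ** n, n / q

    def single(p):
        """Risk and cost of one defense with failure probability p."""
        return p, 1.0 / p

    layer_risk, layer_cost = layered(q=0.1, n=10)  # ten 90%-safe layers
    super_risk, super_cost = single(p=layer_risk)  # equally safe super-defense

    print(f"layered: risk={layer_risk:.1e}, cost={layer_cost:g}")  # 1.0e-10, 100
    print(f"single:  risk={super_risk:.1e}, cost={super_cost:g}")  # 1.0e-10, 1e+10

With Q = 0.1 and N = 10 this reproduces the quoted figures: risk 10^-10
at cost 100 for the layered system, versus cost 10^10 for a single
defense of equal risk.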
This is a rather specious argument, Anders. As far as I can tell, the
entire reasoning consists of calling a multicomponent shielding system
proposed by humans a "multi-layer" defense, while any system proposed by
an SI, no matter how intricately optimized to maximize safety while
minimizing cost, counts as a "single-layer" defense. Why? Because you,
as a human, don't comprehend what the layers are and how they work; and
in fact the optimal solution doesn't consist of these chunky discrete
"layer" things at all, so to you it's a "single thing".

Your human-level solution, of course, has modularity that you can
understand, even though it's charmingly inadequate as a system design,
so you consider it "multi-layered" and hence fitting your heuristic
"don't put all your eggs in one basket", even though your system has far
sparser coverage... being squarely in the tiny "human basket".
--
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence