From: Harvey Newstrom (mail@HarveyNewstrom.com)
Date: Tue Jul 16 2002 - 16:43:01 MDT
On Tuesday, July 16, 2002, at 04:30 pm, Rafal Smigrodzki wrote:
> Harvey Newstrom wrote:
>
> If you don't secure every transaction, you are merely playing a
> gambling game with statistics.
>
> ### Life is a gamble with statistics.
I don't like gambling with my life, and prefer to get rid of such
gambles wherever possible. This means I would like to see all bags
scanned in airport security. Scanning a subset of bags is a sign of
decreased security because the security system can't keep up. If we fix
the security systems so that we can scan all bags, that would be a
sign of increased security. Merely leaving the system broken because
"life is a gamble" sounds more like an admission of defeat rather than a
solution to me.
> This is not real security, and not likely to prevent attacks.
>
> ### Real security does not exist and it is impossible to prevent all
> attacks.
Real security does exist, or else why try? You probably mean perfect
security doesn't exist. This is true. But again, just because it will
never be perfect is not a good argument for accepting weak security. As
I explained above, the statistical approach merely demonstrates that our
security systems can't cover the load. It is an admission of failure
rather than a valid approach to security.
> Even if it works 99% of the time, the
> attackers just keep retrying until they get through.
>
> ### If it works 99% of the time, it means 99 out of 100 bad guys get
> burned - this is likely to reduce their numbers substantially over time,
> especially measured over evolutionary timeframes.
Get real. Nobody wants to slowly increase security over evolutionary
timeframes. We want personal security for ourselves now, not a slight
average increase in security for future generations. Besides, if one
guy gets through at an airport for every 99 who are caught, and that bad
guy downs a plane with more than 99 people on it, then we are losing
more people than we catch. This is not a very good statistic at all.
When one plays with statistical methods for security, one provides the
attackers with a statistical basis for success. They could send 100
people out to attack various airports on a single day. Ninety-nine
might get caught while one gets through. Such a system has predictable
methods and rates for circumvention. A better method would be to screen
all bags, with none allowed through unscreened. It would hopefully have
a better success rate, but at least there would be no "guaranteed"
successes. We would attempt to secure every bag without skipping any.
Any failures would indeed be "failures" and
not "acceptable losses" budgeted into the system. Such failures could
be resolved and hopefully prevented next time. Such a system could keep
growing in reliability, whereas the statistical system is designed to
fall short the first time and stay deficient at the same rate of failure
forever.
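The arithmetic behind the 99%-detection argument can be sketched in a few lines of Python. This is an illustrative back-of-the-envelope calculation, not data: the 99% detection rate comes from the discussion above, and the 150 casualties per successful attack is an assumed figure for a downed airliner.

```python
def expected_outcome(attackers, detection_rate, casualties_per_success):
    """Expected attackers caught, attackers getting through, and casualties
    for a screening system that catches a fixed fraction of attackers."""
    caught = attackers * detection_rate
    successes = attackers * (1 - detection_rate)
    casualties = successes * casualties_per_success
    return caught, successes, casualties

# 100 attackers sent against a 99%-effective screen; each success
# assumed to cost 150 lives (hypothetical plane-load figure).
caught, successes, casualties = expected_outcome(
    attackers=100, detection_rate=0.99, casualties_per_success=150)
print(f"caught ~{caught:.0f}, got through ~{successes:.0f}, "
      f"expected casualties ~{casualties:.0f}")
```

With these numbers the screen catches 99 attackers but the single expected success costs 150 lives, so more people are lost than attackers caught, which is the point of the paragraph above.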
> ### "Trustworthiness" is
> intimately related to a long track record and a stable personality, which
> evolved because it benefited both the carrier of the requisite genes and
> memes, and the societies with the right conditions for its development.
I never meant to imply that "trustworthiness" is bad. However, no
business should "trust" people to the extent of not having written
contracts. No customer should "trust" a cashier to the extent of not
counting their change. No person should "trust" the neighbors to the
point of leaving their doors unlocked. Basic security trumps "trust"
any day.
Statistical security is also bogus. Put all contracts in writing, not
just the iffy ones. Lock all the doors, not just the front door.
Count all your change, not just when the cashier seems suspicious. The
only argument for incomplete security is when the security system is
inadequate.
How do you know if someone has a long track record and a stable
personality, unless you have double-checked them already? If you don't
count your change, how do you know which cashier is trustworthy? It is
only by verifying security in previous transactions that you can afford
to "trust" someone without verification this time. Therefore, this
"blind" trust requires prior verification before it is given. It makes
no sense to argue for "blind trust" over verification, because the
former requires the latter as a foundation.
> saves money rather than costing money. Blocking a single 9/11 magnitude
> incident would pay for a lot of security.
>
> ### Trying to block a surprise attack on an arbitrary target using
> unknown
> methods by unspecified forces would bankrupt any economy and turn any
> political system into a nightmare.
Yes, there will always be some level of security that costs more than
the loss itself would have cost. But the more security infractions are
blocked, the more costs are saved, so that security pays for itself.
In the case of the 9/11 incident, I think that a lot of low-level
security, such as monitoring visas, better communications between the
CIA and FBI, real-time tracking of airplanes, better contingency plans,
and the like would have prevented this much more efficiently than some
megascale engineering project providing perfect protection. I argue not
for a mega-solution, but for a lot of smaller more complete solutions.
If we hadn't had so many little failures, a dozen lower-level checks
that failed and a thousand little details that "fell through the cracks",
we would have had more complete security.
> There is always a complicated and uncertain trade-off between security
> and
> efficiency, and denigrating the concept of trust ignores this fact.
I don't denigrate the concept of trust. But security is not about
"trust". I don't want to board a plane and "trust" the other passengers
not to kill me. This model plainly does not work. True "trust" is when
I "know" (beyond a reasonable doubt) that the other passengers aren't
going to kill me. I would rather prevent strangers from being
terrorists than trust them not to be.
What exactly do you argue as security procedures in your "trust" model?
How would you modify airport security, for example, to put more "trust"
in the system? How does "trusting" people make anything more secure?
-- Harvey Newstrom, CISSP <www.HarveyNewstrom.com> Principal Security Consultant <www.Newstaff.com>
This archive was generated by hypermail 2.1.5 : Sat Nov 02 2002 - 09:15:29 MST