Re: If it moves, we can track it!

From: Anders Sandberg (asa@nada.kth.se)
Date: Thu Oct 17 2002 - 07:13:17 MDT


On Wed, Oct 16, 2002 at 08:07:11PM +0200, Eugen Leitl wrote:
> On Wed, 16 Oct 2002, Anders Sandberg wrote:
>
> > On the other hand, the NSA has repeatedly been criticized for
> > gathering enormous amounts of data which nobody has any real use for.
>
> Do you think that people who invented psyops will stop using it for some
> reason? I don't trust much what is being said about intelligence's
> abilities, or lack thereof. Oh the poor, overworked, incompetent dearies.
> Right.

So they are running psyops against their own funding agencies and the
politicians who give them work, to convince them that they are not doing
a great job? Plenty of the post-9/11 criticism *within* the intelligence
community has been about the lack of coordination and of the ability to
combine disparate data into something useful, rather than suggestions
that the real need is for even better data acquisition. If you read the
comments from various groups on this subject, the political sphere seems
eager to grant new listening powers and new data gathering projects
(obvious political gains, since they "are doing something"), while many
in the intelligence community are trying to find ways of managing the
flood of nearly useless data.

You can always ask Harvey; I think he knows.

> It is hard not to see funny patterns in the design and especially the
> design omissions of officially approved and officially suggested
> cryptosystems (an area where you cannot help showing your hand) if you
> watch them for a few years. I think it's better to titrate paranoia at
> a tolerable tradeoff between cheerful disregard and hiding in the
> bunker hugging your assault rifle, and to assume your adversary is
> efficient, intelligent, and puts its budget to reasonably good use,
> adding estimates of what the classified capabilities could be.

The NSA cryptographers know what they are doing. But that is just the
glamorous side of the agency. After all, most information is plaintext
anyway. If you can set up an Echelon you can get any amount of phone
calls, data packets and radio you want. But the next step, turning the
data into information (and then into knowledge), is far harder and not
easily mechanizable. Automatic voice keyword recognition seems to be a
solved problem, and a bit of context analysis will bring the flood down
to just a big stream of signals containing interesting things. But what
to do with that? Intelligence analysis is tricky, and even the NSA
doesn't have an unlimited supply of analysts. Distinguishing people
talking about 9/11, roleplaying games and other noise sources from real
signals is hard, especially if you don't have any context. In most cases
agencies focus on likely suspects and groups, simply because they would
not have the resources to deal with more.

[BOTE calc: Assume ~1 billion people being scanned, each making one
phone call or sending one email per day. Every thousandth conversation
is suspicious in one way or another and is automatically detected. That
is one million conversations per day that have to be looked at, if only
to throw them away. If it takes one minute to make this decision and
there is a 7-hour workday, we need roughly 2,400 analysts just for this
initial sorting. Actually thinking about the meaning and implications
likely requires far more.]
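The arithmetic of that bracket, spelled out in code (same assumed
inputs as stated above, nothing else):

# Redoing the bracketed estimate; all inputs are the assumptions above.
conversations_per_day = 1_000_000_000  # ~1 billion people, one call or email each
flagged_fraction = 1 / 1000            # every thousandth conversation is flagged
minutes_per_decision = 1               # one minute to keep or discard
workday_minutes = 7 * 60               # a 7-hour workday

flagged_per_day = conversations_per_day * flagged_fraction  # 1,000,000
analysts_needed = flagged_per_day * minutes_per_decision / workday_minutes
print(f"{flagged_per_day:,.0f} flagged/day -> {analysts_needed:,.0f} analysts")
# -> 2,381 analysts, just for the initial sorting.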

Just as trickery with cryptosystems gives hints (sometimes long after
the fact), so do intelligence failures. The 9/11 failure has been
brought fairly clearly into the open due to political infighting between
agencies, and the problem was obviously not lack of information but lack
of correlation. Now, this could all be a big psyop from the NSA and the
others to fool the enemies of the US into believing that the
intelligence community isn't as good as it really is (at the small price
of a few thousand lives, enormous costs and inevitable political attacks
on the agencies themselves). That seems rather unlikely compared to the
possibility that accidents do happen and that the net does have sizeable
holes.

> > techniques will produce enormous amounts of suspicious patterns that
>
> Suspicious patterns are also just information, which can be stored. You
> don't/can't act on all of it. If you recrunch more of the data with the
> same algorithms, or the same data with more sophisticated algorithms,
> you can reduce the amount of noise. Any spook hates one thing: losing
> bits.

Sure. But the problem here is where to put the sensitivity. If you flag
patterns that have reached suspicion level X, you will get both type I
and type II errors. X is likely set so that the amount of work to be
done matches the number of available analysts A. But if we assume
suspicion to have a power law distribution (everything seems to have
it :-) with exponent alpha, the threshold would scale as
A^(-1/(alpha-1)). If the distribution is wide enough (alpha near 1), X
would have to be ridiculously large. Worse, if we assume the level of
suspicion to be an actual indicator of the likelihood of evildoing, the
risk associated with the "subliminal" suspicion below the threshold
diverges as alpha approaches 2, even as this allows more efficient
scanning by the analysts. OK, this is a somewhat back-of-the-envelope
example not directly tied to real numbers, but I think it shows that
just piling up data doesn't really help.
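To make the scaling concrete, here is a sketch assuming a Pareto
distribution of suspicion scores with minimum score 1 (illustrative
numbers only, not a model of any real system):

# Assumed Pareto law for suspicion: P(score > x) = x**(1 - alpha), x >= 1.
N = 1_000_000_000  # items scanned per day (as in the bracket above)
A = 1_000_000      # items the analysts can actually examine

for alpha in (1.2, 1.5, 2.1, 2.5, 3.0):
    # Pick the threshold X so that exactly A of the N items exceed it:
    # N * X**(1 - alpha) = A  =>  X = (N/A)**(1/(alpha - 1))
    X = (N / A) ** (1 / (alpha - 1))
    line = f"alpha={alpha}: threshold X = {X:,.0f}"
    if alpha > 2:
        # With risk proportional to score, the expected risk left below
        # the threshold per item is (alpha-1)*(1 - X**(2-alpha))/(alpha-2);
        # it grows as alpha falls toward 2.
        sub_risk = (alpha - 1) * (1 - X ** (2 - alpha)) / (alpha - 2)
        line += f", sub-threshold risk/item = {sub_risk:.2f}"
    print(line)
# A wide distribution (alpha near 1) forces an absurd threshold (10^15
# here); near alpha = 2 the risk sitting below the threshold piles up.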

> > Besides, would systems like the one that started this thread even be
> > helpful against a sniper? Tracking bullets seems to require a higher
>
> This isn't about snipers, or terrorists. They're just useful idiots. If
> they weren't there one would be forced to invent them. It doesn't matter
> who's responsible for 9/11 or for blowing up those nightclub visitors in
> Bali; the only thing that matters is what this will be used to motivate.

Bureaucracies of all kinds always expand to fill the available
resources; events like these enable an inflow of resources.

> > I see the temptation in the system, and the false security it could
> > create. But the important thing for all privacy advocates is to make
> > sure that whenever proposals for employing systems like these are
> > made, clear demands for accountability and efficiency testing are
> > made. So far it is often a choice between no system or a system (where
>
> Do you know many privacy advocates? I can think of about three with a high
> profile, one of them probably has sold out.

There are far too few around. And the problem really isn't the right to
privacy (that is very much an American thing, although integrity is its
European counterpart), but that people are not committed enough to
keeping open societies open. Thank heaven for those antibodies that are
out there, but there is a real need for a more vigorous society-keeping
movement.
 
> > the pro-system side usually wins in the present climate of paranoia),
> > not a choice between no system and a monitored system.
>
> The best system is one you don't have to explain, because it's
> invisible. If you have to install accountability, you can always modify
> it or deliberately let it break afterwards. In any case you'll have
> trouble inspecting something you don't know is there, are not allowed
> to look at, or lack the skills to inspect.
>
> There's your asymmetry.

So the goal should really be to work towards more symmetric societies.
Open societies can in many ways be defined as societies where information
is more symmetric between citizens and rulers (and between ruling
groups).

-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y

