From: Eugen Leitl (eugen@leitl.org)
Date: Wed Oct 16 2002 - 12:07:11 MDT
On Wed, 16 Oct 2002, Anders Sandberg wrote:
> On the other hand, the NSA has repeatedly been criticized for
> gathering enormous amounts of data which nobody has any real use for.
Do you think that the people who invented psyops will stop using it for
some reason? I don't put much trust in what is said about intelligence
agencies' abilities, or lack thereof. Oh, the poor, overworked,
incompetent dearies. Right.
If you watch officially approved and officially recommended cryptosystems
for a few years, it is hard not to notice funny patterns in their design,
and especially in their design omissions (one of the places where you
cannot avoid showing your hand). I think it's better to titrate paranoia
to a tolerable point between cheerful disregard and hiding in the bunker
hugging your assault rifle: assume your adversary is efficient,
intelligent, and puts its budget to reasonably good use, and add
estimates of what the classified capabilities could be.
> Sifting through the sheer mass is too much work, and automated
I do think that Echelon is for real, and that it involves OCR for faxes,
realtime and offline speech recognition for voice, and that it pipes
everything sent in the clear past sophisticated AI that doesn't just look
at keywords but also at meaning, filters and ranks sources, and
cross-correlates traffic. Whom were you talking to, Tim May? Been
browsing StormFront, and doing really interesting Google searches, eh?
How very interesting.
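As an illustration only: the keyword-plus-source-ranking stage could look
something like the toy sketch below. Every keyword, weight, and source
label here is invented for the example; nothing is claimed about any real
system's design.

```python
# Toy sketch of a keyword-plus-source-ranking traffic filter.
# All keywords, weights, and source labels are invented for illustration.

KEYWORD_WEIGHTS = {"detonator": 5.0, "cipher": 2.0, "route": 1.0}
SOURCE_RANK = {"watchlisted": 3.0, "unknown": 1.0, "trusted": 0.2}

def score(message: str, source: str) -> float:
    """Combine crude keyword hits with a per-source multiplier."""
    text = message.lower()
    kw = sum(w for k, w in KEYWORD_WEIGHTS.items() if k in text)
    return kw * SOURCE_RANK.get(source, 1.0)

intercepts = [
    ("pick up milk on the route home", "trusted"),
    ("the detonator and cipher are ready", "watchlisted"),
]
# Rank intercepts so human eyeballs only see the top of the list.
ranked = sorted(intercepts, key=lambda m: score(*m), reverse=True)
```

The point of the toy is the architecture, not the numbers: a cheap first
pass orders the flood so that only the top slice ever reaches a person.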
I would be extremely surprised if there wasn't a database slot with my
name on it, which contains my entire traffic, voice included. I doubt that
human eyeballs have seen it, or will ever see it (unless I do something
terminally stupid), but it's sitting there, slowly being filled up.
I don't think there's a slot for everybody in the world, or even
everybody on this list, but it's technically and financially feasible.
Professional paranoia thus requires one to assume that anything sent in
the clear gets intercepted, analyzed, and probably stored verbatim for
later retrieval and reanalysis.
> techniques will produce enormous amounts of suspicious patterns that
Suspicious patterns are also just information, which can be stored. You
don't, and can't, act on all of it. If you recrunch more of the data with
the same algorithms, or the same data with more sophisticated algorithms,
you can reduce the amount of noise. Any spook hates one thing: losing
bits.
> have to be studied by humans, mostly resulting in nothing useful.
Human eyeballs are a scarce resource, so you only let them see the
highest-ranking hits. Your rate of success is fed back into ranking
algorithm design. Refine that for a few decades, using really bright
people, and you get rather good results.
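A minimal sketch of that feedback loop, assuming a simple multiplicative
reweighting rule; the update rule, feature names, and numbers are all
made up for the example:

```python
# Sketch: feed analyst verdicts back into ranking weights.
# The multiplicative update rule and all values are illustrative
# assumptions, not anyone's actual algorithm.

def update_weights(weights, features, was_useful, lr=0.5):
    """Boost the features of confirmed hits, decay those of misses."""
    factor = (1 + lr) if was_useful else (1 - lr)
    return {f: (w * factor if f in features else w)
            for f, w in weights.items()}

weights = {"cipher": 1.0, "route": 1.0}
# An analyst marks a "cipher" hit as useful and a "route" hit as noise:
weights = update_weights(weights, {"cipher"}, True)   # cipher -> 1.5
weights = update_weights(weights, {"route"}, False)   # route  -> 0.5
```

Run that loop over every verdict for a few decades and the ranking drifts
toward whatever the analysts actually found useful.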
> Unless one assumes some magical AI a huge system like this would
You should look at some of the NSA's patents. And that's the unclassified
part; you can assume that they can do a lot better than what they're
(deliberately) letting you know.
> likely be more of a drain of resources than a 1984 nightmare. It won't
> be security through obscurity, but rather inefficiency through too
> much data acquisition.
Storage is cheap. Crunch is less so, especially intelligent crunch, but
you can reprocess the archives afterwards.
> Besides, would systems like the one that started this thread even be
> helpful against a sniper? Tracking bullets seems to require a higher
This isn't about snipers, or terrorists. They're just useful idiots. If
they weren't there, one would be forced to invent them. It doesn't matter
who's responsible for 9/11 or for blowing up those nightclub visitors in
Bali; the only thing that matters is what this will be used to motivate.
> resolution than I can imagine you get using microwave/radio signals
> from the existing systems. Even if you could do the tracking
> accurately, the information that the sniper was at location X at time
> Y might not be very helpful if location X is (say) the roof of a
> building filled with people where he could vanish into the crowd
> quickly. The assumption seems to be that the police would immediately
If he was dumb enough to carry a cellphone, and there are records of all
cellphone IDs with their cell assignments, there's going to be just one
ID occurring at all sites of the shootings. At some point in the future
this will be doable with blanket license plate OCR, blanket biometrics,
or other remote fingerprinting methods. Having a classifiable total
record of all moving objects' trajectories is just the holy grail; you
can do a lot with much patchier, haphazard info.
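The cell-record trick above is nothing more than set intersection over
per-site sighting logs. A sketch, with invented handset IDs:

```python
# Intersect the handset IDs logged by cells near each shooting site.
# All IDs and sightings are invented for illustration.
from functools import reduce

sightings = [
    {"id_17", "id_42", "id_99"},  # handsets near shooting 1
    {"id_07", "id_42", "id_31"},  # handsets near shooting 2
    {"id_42", "id_88"},           # handsets near shooting 3
]
common = reduce(set.intersection, sightings)
# common == {"id_42"}: the one handset present at every site
```

With patchy data you would relax this to "IDs present at most sites"
rather than all of them, but the principle is the same.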
> know something was afoot and immediately appear at location X; in
> reality there are lots of delays that just makes the system helpful in
> doing the investigation - it does not solve the crime immediately.
Of course not. But everything will be used as an excuse to boost
capabilities. Sell more hardware (benefits industry), fewer jobless
security consultants (benefits persons), show you're a politician doing
Something Good for Your Country (and your own pocket). Completely
win-win.
> I see the temptation in the system, and the false security it could
> create. But the important thing for all privacy advocates is to make
> sure that whenever proposals for employing systems like these are
> made, clear demands for accountability and efficiency testing are
> made. So far it is often a choice between no system or a system (where
Do you know many privacy advocates? I can think of about three with a
high profile, and one of them has probably sold out.
> the pro-system side usually wins in the present climate of paranoia),
> not a choice between no system and a monitored system.
The best system is one you don't have to explain, because it's invisible.
If you have to install accountability, you can always modify it, or
deliberately let it break, afterwards. In any case you'll have trouble
inspecting something you don't know is there, aren't allowed to inspect,
or lack the skills to inspect.
There's your asymmetry.
This archive was generated by hypermail 2.1.5 : Sat Nov 02 2002 - 09:17:37 MST