[p2p-research] Drone hacking

Andy Robinson ldxar1 at gmail.com
Wed Dec 23 16:58:05 CET 2009


"What it boils down to is that you are too lazy to even attempt to
learn the math"

Well, aren't you a nice person.

I thought none of us could learn the mathS (mathematics is a plural word,
not singular) because it was too complicated for our inferior meaty brains.

What's really bad is that you pretend to be able to speak about human
motivations and social action without any significant knowledge of
psychology, sociology, anthropology or cultural studies.  I wouldn't
attribute it to laziness so much as arrogance.  It'd be like me pretending
to be able to do calculus by applying Negri.

There are a lot of other things I've never bothered attempting to learn
either, a number of them for lack of time, but a great many because I don't
see them as useful for the kind of problems I try to solve.  I don't know
the ins and outs of medieval theology, Soviet Lamarckian agricultural
theories or phrenology, but I'm quite capable of telling anyone who happened
to believe in them why they're utterly irrelevant to my research interests
and quite probably bollocks into the bargain.  The onus is on you to MAKE me
interested by showing how what you think I should study is central to what
I'm trying to find out.  You have singularly FAILED to do this, and in fact
made me LESS inclined to check it out because you have shown your utter
incapacity to address even the most basic problems which anyone who knows
what they're talking about in the fields dealing with human social life
would raise.  This is not laziness, but sensible time management.

"you are also dissatisfied with my attempt to dumb it down"

If that was dumbed down, then I don't fancy your chances of ever teaching
anything.  Every one of your posts contained undefined technical
terminology, such as 'autonomous and algorithmic in a pure sense', 'ambient
entropy', 'finite state machines', 'cryptographic lack of predictability',
etc etc.

On the occasions where I was concerned enough that you might mean something
interesting to actually go looking - as in the case of your deus ex machina
"pervasive latent and intentional sensor networks" - the concept either
didn't turn up on Google or only turned up in contexts (such as weather
readings) which are not relevant.  In that case I eventually figured, after
a considerable time on Google, that you probably meant such mundane things
as mobile phone tracking, RFID chips in products, CCTV footage, and spy
satellites (if this is indeed what you meant - the fact that nobody else
uses the concept of 'pervasive latent and intentional sensor networks' makes
it rather hard to confirm).  Shortly after which, I figured out that,
barring revolutionary breakthroughs, none of these technologies has
anything like the capacity to generate reliable personalised data in
marginal settings.  Now, would it have been so difficult to say, "they could
do things like tracking nomads on their mobile phones", or "they could zoom
in on the village with a next generation spy satellite"?  Or to have said,
"pervasive latent and intentional sensor networks *such as *spy satellites
and mobile phone tracking"?  Or to have given us a link to an article about
pervasive latent and intentional sensor networks?

Oh... but then we would have realised that you really meant something quite
widely known about and clearly inadequate for the task you are assigning to
it.

"We were already at the point where almost every statement came
with a list of implied theoretical caveats the length of my arm."

No, we started with simple terms like "predict", "certain", etc., and then had
theoretical caveats added every time someone picked holes in them.  It's a
classic "motte-and-bailey" argument - a defensible but uninteresting
argument (often a truism) standing inside a castle and acting as the basis
for expansion into the surrounding fields.  When the argument in the fields
comes under attack, it retreats back into the castle.

"The deep-seated and pathological need for the security blanket of
simple certainty is one of the less endearing traits of the human
species."

The deep-seated and pathological need for the security blanket of simple
certainty is absolutely irrelevant to all of the arguments which have been
made against your position.  It is a clumsy attempt on your part to impute
motives to others so as to convince yourself or others that perfectly solid
arguments and evidence against your position, not to mention its widespread
rejection, are no threat to its validity.  If you persist in arguing in this
way, don't be surprised when other people start questioning *your* motives.

Actually I couldn't give a flying fuck whether people's actions are
basically 'determined' or 'predictable' on some level, or whether absolute
certainty is achievable.  The only emotional stake I have in this is that I
don't want control-freaks in governments and businesses, or rogue
supercomputers, imposing totalitarian social control by predicting and
manipulating people, *or* making massively false assumptions based on
quantitative models which have real effects in terms of intrusiveness,
violence, persecution, etc.  So, yes, I'm rather anxious about the
information leaking through from websites, traffic monitoring, mobile phone
tracking, RFIDs, 'intelligent' CCTV cameras and the rest.  Not for
philosophical reasons but because of the political uses to which they could
be put.

You, on the other hand, don't seem to worry about this at all, but simply to
accept blithely that the loss of existential freedom and the
imposition of total control are prices to be paid for progress and for the
final loss of what you take to be fallacious humanist ideologies.

From your very first posts on this thread, you are absolutely obsessed with
the view that anything that goes against your argument is simply a front for
an assumption of basic existential uniqueness and unpredictability.  This is
your own little hobby-horse.  It's your way of straw-manning opposition by
hegemonising it with the figure of a privileged Other.  Ultimately, I don't
really care if it turns out that, if all the huge number of variables could
be calculated and factored in, people are each absolutely determined and
hence (in principle) absolutely predictable.  But what I *do* care about is
people thinking they can capture a limited cross-section of data, however
massive, tally this up into a parody of the real individual (ignoring
whatever has not been captured), and then act as if the imperfect
predictions they can make (85% or whatever) amount to reliable prediction
and total capture of the entire person.  It is the old representationalist
fallacy repeating itself yet again, and every time it leads to
authoritarianism and misery.

I also think that you are absolutely certain, arrogantly certain, that what
you don't know doesn't matter, that it can be locked up in a safe corner of
statistical anomaly and have no impact either on your science, its
applicability and effectiveness, or your dubious philosophical assumptions.
You can safely ignore the 15% of the time that the supercomputer will
predict wrongly or be unable to predict, or the 15% of people it can't
predict, the 15% of people shot dead in the street who aren't about to
commit a murder, the 15% of suicide bombings it isn't able to prevent.  This
unpredictable and nondenumerable excess (let us call it the Real, in a
Lacanian sense) constantly haunts your theory, but you keep it in a safe
little cell by rendering it as a statistical probability without ever
determining what causes it.  And ultimately it returns and shatters your
carefully built scientific models and the systems of social control built on
their basis, because a string of unique... sorry, 'statistically improbable'
circumstances produces something the system can't handle.

And the sooner the better, because up to that point the 15% who are being
responded to in inappropriate ways, who are being viewed as criminals when
they're not, who are quite probably being classified as *risky individuals*
because they are less predictable than others (and this *is* happening in
British law and is implicit in the intelligent CCTV project), are being
deeply oppressed by a social system which reduces them from a living being
to a statistical residue and relates much more violently and problematically
to them than to the 85% of people who do what's expected.  (Or the same
people, who act unpredictably 15% of the time, and are persecuted or
unrecognised on these occasions - it makes little difference which).

Of course it is also an issue for your claim that human beings aren't unique
or unpredictable or 'like snowflakes', since they're rendered 15% (or 1%, or
0.001%, it barely matters) unpredictable.
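
To make the arithmetic concrete, here is a minimal Bayes'-rule sketch in
Python.  The 85%/15% figures are the illustrative ones used above; the base
rate of one genuine 'threat' per 100,000 people is my own assumption, chosen
only to show the shape of the problem:

    # What an "85% accurate" predictor delivers when the predicted
    # behaviour is rare.  All figures illustrative/assumed, as above.
    population = 1_000_000          # people the system watches
    base_rate = 1 / 100_000         # assumed prevalence of real threats
    sensitivity = 0.85              # P(flagged | real threat)
    false_positive_rate = 0.15      # P(flagged | harmless)

    threats = population * base_rate                          # 10 people
    true_pos = threats * sensitivity                          # 8.5
    false_pos = (population - threats) * false_positive_rate  # ~150,000

    precision = true_pos / (true_pos + false_pos)
    print(f"people flagged: {true_pos + false_pos:,.0f}")     # ~150,007
    print(f"P(real threat | flagged): {precision:.5f}")       # ~0.00006

In other words, on these assumptions more than 99.99% of the people the
system flags are harmless.  The 'statistical residue' is not a safe corner
of the model; it is nearly everyone the model touches.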

Let us remember where this started.

*"You can exploit and leverage decentralized networks in ways that
are far more subtle and difficult to detect than centralized ones --
they have their own weaknesses."*
"When the main tools of war become vast supercomputers and very
advanced theoretical mathematics, it won't favor the scrappy "freedom
fighter". It is much easier to buy an AK47 or build an IED than to
create a competitive (and survivable) analytical supercomputing
infrastructure that can play this particular game of chess."
*"Hollywood movies aside, a scrappy band of misfits would get eaten
alive by a professional organization with deep pockets."*
"You are underestimating how predictable individuals and social
networks are for all practical purposes."
*"Throw lots and lots of raw data into the system
and let the mathematics deal with the useful pattern extraction --
they are much more reliable than humans at discerning subtle
relationships."*
"No one is predicting the behavior of some standardized
"average" individual, they are predicting the behavior of *you* in a
specific context."
*"This has nothing whatsoever to do with being "in denial about the
*
*importance of culture and social construction in human action".
*
*Indeed, it quantifies the importance far better than anyone is ever
likely to be comfortable with."*

All very big and specific claims in *my* fields of specialism rather than
yours (asymmetrical warfare and cultural difference), made with little
knowledge of what I was even talking about at the time.  Here are the basic
claims:  '*you*' (presumably American-speak for each and every person and
not simply the addressee) is predictable 'for all practical purposes',
specifically including asymmetrical warfare; culture and social life are
similarly predictable and quantifiable; decentralised networks can be easily
manipulated.

None of this hinges on your hobby-horse about whether people are ultimately
determined or in principle predictable.  It hinges on whether large
hierarchical organisations attempting to fight networked,
locally-knowledgeable adversaries in asymmetrical wars can overcome their
lack of local knowledge, cultural insensitivity and strategic disadvantages
(such as incapacity to 'pacify') by means of computerised predictive
modelling.  Since these differences themselves lead to *lack of
data*, *resistance to data gathering*, *marginality* in relation to easily
monitored data-trails, *disguise* in relation to these trails, and *culturally
variable and largely unknown* forms of networking and use which are not
simply patterns but sets of meanings, *it matters not a jot if these people
are ultimately determined and predictable, because sufficient data will not
be available, what is available will not be reliable, and what is observed
will not be understood*.
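
And here, in the same vein, is a toy sketch of the first two problems (lack
of data, unreliable data).  Every number in it is an assumption chosen
purely for illustration: a population whose behaviour is *fully determined*
by three variables, observed through a sensor network that misses most
readings and garbles part of the rest:

    import random
    random.seed(1)

    def behaviour(a, b, c):
        # Fully deterministic rule: perfectly predictable *in principle*.
        return (a + b + c) % 2

    population = [tuple(random.random() < 0.5 for _ in range(3))
                  for _ in range(10_000)]

    def observe(person, missing=0.6, garbled=0.2):
        # Most readings never arrive (marginality, resistance to data
        # gathering); some of the rest are wrong (disguise, unreliability).
        out = []
        for v in person:
            r = random.random()
            if r < missing:
                out.append(None)
            elif r < missing + (1 - missing) * garbled:
                out.append(not v)
            else:
                out.append(v)
        return out

    def predict(obs):
        # Best effort: guess unseen variables at random, apply the rule.
        return behaviour(*(random.random() < 0.5 if v is None else v
                           for v in obs))

    hits = sum(predict(observe(p)) == behaviour(*p) for p in population)
    print(f"accuracy: {hits / len(population):.0%}")   # roughly 51%

Total determinism, and the 'predictor' still performs barely better than a
coin-flip, because the data pipeline is starved and polluted before the
mathematics ever sees it.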

Only the third of these problems could even in theory be corrected through
advances in computing, and only if computers get to the point where they can
write works of anthropology and sociology unaided.  At present computers
cannot even write works of *physics* unaided.  Now, if ever we get to the
point where computers can write anthropology better than humans, again this
is no great existential problem for me - I'll be sad to see a certain little
academic subculture die out, I'll be sad if friends lose jobs, and I'll be a
little anxious about what the computers might do if they get an undetected
virus, but I'll also be excited to see things humans have been struggling to
figure out finally make sense, and glad if the effects of cultural
insensitivity are mitigated as a result.  But if such robo-androids ever
come into being, firstly they will not be thinking like better-than-human
mathematicians but like better-than-human researchers in the appropriate
fields, secondly this is a *long* way off and does not seem prefigured at
all by the kind of thing you're talking about, and thirdly it still would
not mean that large hierarchical organisations would lose all their other
disadvantages - in fact, they would probably ignore the robo-anthropologists
much the same way they ignore the human ones.