[p2p-research] Drone hacking
J. Andrew Rogers
reality.miner at gmail.com
Mon Dec 21 19:39:08 CET 2009
On Mon, Dec 21, 2009 at 12:44 AM, Andy Robinson <ldxar1 at gmail.com> wrote:
> On the supercomputer issue... I think we're getting into sci-fi here, if
> it's expected that supercomputers will really be able to deal with all the
> complexities of warfare in poorly-understood postcolonial settings.
The reality is more "sci-fi" than your sci-fi scenario. It is not
about computerized command and control. It is about doing deep
predictive behavioral modeling and scaling that up to the level of a
society or country, understanding the underlying patterns of
individuals and social networks at a level of detail not even
available to the individuals themselves. Forget warfare; that is a
level of abstraction too far, and a blunt instrument.
> But supposing they are - the precipitous rate of decline in the cost of
> computer processing capacity would doubtless see one of these
> supercomputers in most homes within a few decades at most of their
> invention. And if not - people would quickly find ways to combine the
> processing capacities of smaller computers.
First, you can't just "combine processing capacities"; the computer
science doesn't work that way. Second, even if you magically could
combine processing capacities, you are still missing the rarefied
theory, computer science, and mathematics that would make it useful
for this purpose. Third, even if you could magically combine
processing capacities *and* magically build the requisite world-class
theoretical and applied computer science organization, you are still
missing the epic quantities of data that would make it all useful.
Hollywood movies aside, a scrappy band of misfits would get eaten
alive by a professional organization with deep pockets. It is
asymmetric warfare, but not in the way you normally mean.
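To make the first point concrete, here is a toy sketch of Amdahl's law,
the textbook reason that bolting N machines together does not give you N
times the capability once any fraction of the work is serial or
communication-bound. The fractions and processor counts are illustrative
numbers only, not a model of any real workload:

    # Toy sketch: Amdahl's law. A fraction `serial_fraction` of a job
    # cannot be parallelized, so adding processors saturates quickly.
    # The numbers are illustrative, not measurements of anything real.

    def amdahl_speedup(n_processors: int, serial_fraction: float) -> float:
        """Best-case speedup on n_processors given the job's serial fraction."""
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

    if __name__ == "__main__":
        for n in (10, 100, 10_000, 1_000_000):
            # With just 5% serial work, speedup never exceeds 20x,
            # no matter how many small computers you "combine".
            print(f"{n:>9} processors -> {amdahl_speedup(n, 0.05):6.1f}x")

And that says nothing yet about the theory or the data, which are the
harder problems.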
> This is not going to happen for two reasons. Firstly because strategic
> planners programming data into the computers are not going to be able to
> gather sufficiently precise information on how the motivations of
> adversaries separated by radical difference are formed and expressed - they
> have trouble even understanding the most basic facts about cultures
> different from their own.
Your concept of how this works is incorrect. It is autonomous and
algorithmic in a very pure sense; the assumptions of the programmer
play no role in the capabilities of this type of system. The idea that
you have people entering rules or some such is a charmingly quaint
1980s view.
In fact, this reflects what I was talking about with regard to being
able to build a theoretical mathematics and computer science
organization. Understanding the nature and scope of the task is not a
trivial matter; your average computer scientist would be completely
lost.
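To give a sense of what "autonomous and algorithmic" means, here is a
deliberately trivial sketch, on synthetic data nowhere near the real
systems, in which the parameters of a behavioral model are induced from
observed behavior rather than entered by a programmer:

    # Toy sketch: a behavioral model whose parameters are learned from
    # observations by plain logistic regression. No rules are entered
    # anywhere; the synthetic data below is made up for illustration.

    import math
    import random

    def fit_logistic(data, steps=5000, lr=0.1):
        """Fit w, b for p(action) = sigmoid(w*x + b) by stochastic gradient ascent."""
        w, b = 0.0, 0.0
        for _ in range(steps):
            x, y = random.choice(data)       # one observed (feature, action) pair
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            w += lr * (y - p) * x            # parameters move toward the data,
            b += lr * (y - p)                # not toward anyone's assumptions
        return w, b

    if __name__ == "__main__":
        random.seed(0)
        # Synthetic observations with a hidden pattern the programmer never states.
        data = [(x, 1 if x > 0.6 else 0) for x in (random.random() for _ in range(500))]
        print("learned parameters:", fit_logistic(data))

The toy fits two numbers from a few hundred fake observations; the point
is only that the structure is induced from data, not typed in as rules.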
> Secondly because the variations are inherently
> unpredictable - there is a level of individual difference in future
> reactions which, while doubtless causal on some level, is not predictable
> from observable factors which could be programmed into the computer.
You are underestimating how predictable individuals and social
networks are for all practical purposes.
> It would only need the addition of a 'perverse' motivation to take
> control completely, manipulating its programmers.
Non sequitur: there is nothing AI about this capability. Societies are
susceptible to simple algorithmic induction, and people are not the
unique snowflakes they think they are. It is vaguely on the path to AI
theoretically, but in the way the Wright Brothers were on the path to
the SR-71.
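As a toy example of what I mean by simple algorithmic induction (a
deliberately trivial rule on a made-up five-person graph, not the real
technology): even "people who share many contacts tend to be connected"
recovers structure that nobody entered by hand.

    # Toy sketch: common-neighbors link prediction on a made-up social
    # graph. Absent ties are scored by the number of shared contacts.

    from itertools import combinations

    friends = {
        "ana": {"bo", "cy", "di"},
        "bo":  {"ana", "cy", "eve"},
        "cy":  {"ana", "bo", "di"},
        "di":  {"ana", "cy"},
        "eve": {"bo"},
    }

    def predicted_ties(graph):
        """Score each absent tie by the number of shared contacts."""
        scores = {}
        for a, b in combinations(graph, 2):
            if b not in graph[a]:
                scores[(a, b)] = len(graph[a] & graph[b])
        return sorted(scores.items(), key=lambda kv: -kv[1])

    if __name__ == "__main__":
        for pair, shared in predicted_ties(friends):
            print(pair, "shared contacts:", shared)

Nothing about culture, motive, or ideology was entered anywhere, and
this is of course nothing like the actual methods; the point is that the
structure falls out of the data rather than out of anyone's assumptions.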
> In any case, I don't think we need to worry. The ignorance American
> military planners show about local cultures, motives and circumstances would
> carry into their demands on computer systems, and they would not even try to
> model such things. Doubtless their supercomputer would be programmed on
> fashionable American models such as rational choice theory, and would end up
> about as smart as those targeted ad programmes which think I want to go on
> holiday in a country whenever I research about human rights there.
This is simply naivete on your part; you don't understand the basic
nature of the technology. State-of-the-art sales targeting based on
primitive versions of this kind of technology is invisible, which is
the entire point. The only things that have kept it from becoming
ubiquitous are that the underlying mathematics is pretty new, few
people really understand it well enough to do something with it, and
the traditional implementation algorithms do not scale or parallelize
well enough to allow it to be broadly exploited.
If Google -- a system only suited to the simplest types of analysis --
were employing the type of technology we are talking about here, you
would not be aware of it.
--
J. Andrew Rogers
realityminer.blogspot.com