[p2p-research] Drone hacking

J. Andrew Rogers reality.miner at gmail.com
Tue Dec 22 02:05:33 CET 2009


On Mon, Dec 21, 2009 at 2:48 PM, Athina Karatzogianni
<athina.k at gmail.com> wrote:
> Just to get in this for a moment, since I've been following the discussion
> for days. Andrew Rogers: the reason we are in a global financial mess is
> because we thought we could predict everything with rigid financial models,
> supercomputers and infallible systems?


The financial market mess is a classic case of regulatory capture writ
large, combined with the international homogenization of those
regulations so that damage spread freely -- regulatory monocultures
have the same weaknesses as any other monoculture. Throw in a dollop
of quid pro quo between Wall Street and the government, plus the
politicization of finance generally, and the outcome was completely
predictable in the sense that the system was going to come unhinged
eventually.

Broken regulations written for politically-connected special interests
have nothing to do with predictive financial market models. Indeed,
the predictive financial market models saw this coming and many
reputable quant hedge funds made out nicely. That the current
administration appointed the person responsible for the oversight of
Madoff to be the head of the SEC tells you everything you need to
know.


> Complex and chaotic systems are
> extremely difficult to predict. We don't live in mediocristan, but in
> extremistan as Taleb amusingly informs us. That is common knowledge I should
> think. Besides all the American utopian technology crap of the 1950s, what
> makes you really think that 'a computer will still be able to predict and
> manipulate your behavior below the threshold of your ability to detect it'?
> Hmmmmmmm, we don't live in a comic book and writing about life like we do is
> charming but really not helpful.......


There is literature on this, and countless supporting empirical
results spanning half a century. The result is required theoretically
(by elementary mathematics) and has been demonstrated in numerous
practical experiments. It is deployed in real systems with reliable
and measurable results across every part of the population. It is not
very interesting at a basic "does it work" level, primarily because it
is old science replicated myriad times. The interesting part is the
long-term ramifications and implications of this fact.


To expand on this point:

There is a theoretical measure of the relative complexity of a
machine. Ignoring the minutiae, the measure cannot distinguish
between two cases: when the machine being measured is *vastly* more
complex than the machine doing the measuring, and when the machine
being measured is *infinitely* complex. Both cases produce the same
result for the measure, but people care about the difference.
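A toy illustration of this indistinguishability, using compression
ratio as a crude, computable stand-in for the kind of complexity
measure described above (true Kolmogorov complexity is uncomputable;
the specific names and parameters here are my own, not from the
original post):

```python
# Sketch: an observer measuring complexity via compressibility cannot
# tell a merely *vastly* complex deterministic machine (a seeded PRNG)
# from a source it must treat as *infinitely* complex (OS entropy).
import os
import random
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size over original size; ~1.0 means 'looks incompressible'."""
    return len(zlib.compress(data, 9)) / len(data)

n = 1 << 16  # 64 KiB samples

# A small deterministic program whose output is far more complex than
# anything the measuring program can model: a pseudorandom generator.
rng = random.Random(42)
pseudo = bytes(rng.getrandbits(8) for _ in range(n))

# A source the measurer has to treat as infinitely complex: OS entropy.
true_random = os.urandom(n)

# Both ratios come out near 1.0 -- from the measure's point of view the
# two sources are indistinguishable, even though one is a tiny program.
print(f"pseudorandom: {compression_ratio(pseudo):.3f}")
print(f"true random:  {compression_ratio(true_random):.3f}")
```

The point is not that compression is the measure in question, only that
any bounded observer hits the same wall: "too complex for me" and
"infinitely complex" look identical from the inside.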

Human behavior and cognitive processes, no matter how you measure
them, have always come out looking like they are driven by fairly
unremarkable finite state machines from the perspective of vanilla
silicon -- not even vastly more complex ones. Per another basic
theorem, we are incapable of predicting our own behavior, which is why
we feel unpredictable even when a computer can reliably show just how
predictable we actually are.
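A minimal sketch of the kind of predictor behind the classic
Shannon/Hagelbarger "mind-reading machine" experiments: track what
tends to follow each short context of past choices and guess the
majority. The "player" below is a toy stand-in with a mild alternation
bias (an assumption of mine; the real experiments used actual human
choices), yet even this trivial machine beats chance comfortably:

```python
# Sketch of a context-based next-symbol predictor: it learns which
# symbol usually follows each short run of past symbols and guesses
# the majority, updating its counts only after each guess.
from collections import defaultdict
import random

def predict_sequence(seq, order=2):
    """Guess each symbol from the previous `order` symbols; return accuracy."""
    counts = defaultdict(lambda: defaultdict(int))
    correct = 0
    for i in range(order, len(seq)):
        ctx = tuple(seq[i - order:i])
        stats = counts[ctx]
        # Majority vote over what followed this context before; default 0.
        guess = max(stats, key=stats.get) if stats else 0
        if guess == seq[i]:
            correct += 1
        stats[seq[i]] += 1  # learn only after guessing
    return correct / (len(seq) - order)

rng = random.Random(0)
# Toy "player": alternates choices with 70% probability instead of 50/50,
# a stand-in for the weak biases real people exhibit.
player = [rng.randrange(2)]
for _ in range(5000):
    player.append(1 - player[-1] if rng.random() < 0.7 else player[-1])

print(f"predictor accuracy: {predict_sequence(player):.2f}")  # well above 0.5
```

The predictor knows nothing about the player's internal state; it only
needs the player's behavior to be generated by a small finite state
machine with any bias at all, which is exactly the empirical situation
described above.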

(As an aside, Taleb is frequently invoked far outside the legitimate
scope of his argument, as is the case here. We are basically
discussing the unsimplified theoretical context from which Taleb's
argument is derived.)



Ignoring the future technology issues, this raises all sorts of
difficult questions, since it violates a tacit axiom of most
societies. How long can we maintain a social fiction if it is
increasingly exploitable? This is a conundrum which, like the rapidly
diminishing real value of human labor, has no easy or pleasant answer.


-- 
J. Andrew Rogers
realityminer.blogspot.com
