[p2p-research] Drone hacking

Michel Bauwens michelsub2004 at gmail.com
Tue Dec 22 05:36:27 CET 2009


I had a talk with cognitive neuroscientist Sarah Van Gelder at TEDx last
month.

I asked her what she thought about the theses expounded by J. Andrew and
endorsed by Ryan -- that "the brain is a machine, a computer" and that all
individuals are totally predictable, given enough knowledge of mathematics
and computers.

Her answer: "Oh my god, those naive ideas have been abandoned at least since
the '80s; no serious cognitive scientist would adhere to them."

On Tue, Dec 22, 2009 at 8:05 AM, J. Andrew Rogers
<reality.miner at gmail.com> wrote:

> On Mon, Dec 21, 2009 at 2:48 PM, Athina Karatzogianni
> <athina.k at gmail.com> wrote:
> > Just to get in on this for a moment, since I've been following the
> > discussion for days. Andrew Rogers: the reason we are in a global
> > financial mess is that we thought we could predict everything with
> > rigid financial models, supercomputers and infallible systems?
>
>
> The financial market mess is a classic case of regulatory capture writ
> large, combined with the international homogenization of said
> regulations so that damage spread freely -- regulatory monocultures
> have the same weaknesses as any other monoculture. Throw in a dollop
> of quid pro quo between Wall Street and the government, and the
> politicization of finance generally. A completely predictable outcome
> in the sense that the system was going to come unhinged eventually.
>
> Broken regulations written for politically-connected special interests
> have nothing to do with predictive financial market models. Indeed,
> the predictive financial market models saw this coming and many
> reputable quant hedge funds made out nicely. That the current
> administration appointed the person responsible for the oversight of
> Madoff to be the head of the SEC tells you everything you need to
> know.
>
>
> > Complex and chaotic systems are extremely difficult to predict. We
> > don't live in Mediocristan, but in Extremistan, as Taleb amusingly
> > informs us. That is common knowledge, I should think. Besides all the
> > American utopian technology crap of the 1950s, what makes you really
> > think that 'a computer will still be able to predict and manipulate
> > your behavior below the threshold of your ability to detect it'?
> > Hmmmmmmm, we don't live in a comic book, and writing about life as if
> > we do is charming but really not helpful...
>
>
> There is literature on this, and countless supporting empirical
> results that span half a century. This result is required
> theoretically (elementary mathematics) and has been demonstrated in
> numerous practical experiments.  It is deployed in real systems with
> reliable and measurable results across every part of the population.
> It is not very interesting at a basic "does it work" level primarily
> because it is old science replicated myriad times.  The interesting
> part is the long-term ramifications and implications of this fact.
>
>
> To expand on this point:
>
> There is a theoretical measure for the relative complexity of a
> machine.  Ignoring the minutiae, the measure does not distinguish
> between two cases: when the machine being measured is *vastly* more
> complex than the machine doing the measuring, and when the machine
> being measured is *infinitely* complex. These produce the same result
> for the measure, but people care about the difference.
>
> Human behavior and cognitive processes, no matter which way you
> measure them, have always been measurable as being driven by fairly
> unremarkable finite state machines from the perspective of vanilla
> silicon -- not even vastly more complex ones. Per another basic
> theorem, we are incapable of predicting our own behavior, which is why
> it feels like we are not predictable even when a computer can reliably
> show just how predictable we actually are.
>
> (As an aside, Taleb's work is frequently invoked far outside of its
> legitimate scope, as is the case here. We are basically discussing the
> unsimplified theoretical context from which Taleb's argument is
> derived.)
>
>
>
> Ignoring the future technology issues, this raises all sorts of
> difficult questions since it violates a tacit axiom of most societies.
> How long can we maintain a social fiction if it is increasingly
> exploitable?  This is a conundrum which, like the rapidly diminishing
> real value of human labor, does not have an easy or pleasant answer.
>
>
> --
> J. Andrew Rogers
> realityminer.blogspot.com
>
> _______________________________________________
> p2presearch mailing list
> p2presearch at listcultures.org
> http://listcultures.org/mailman/listinfo/p2presearch_listcultures.org
>
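The indistinguishability Rogers describes can be sketched concretely. A minimal illustration (my own construction, not anything from the thread), using zlib compression as a crude, computable stand-in for the complexity measure: the output of a few-line deterministic program and genuine OS entropy look equally incompressible to the bounded measuring machine.

```python
# Sketch: zlib compression ratio as a rough proxy for the complexity
# measure discussed above. A sequence from a tiny deterministic
# generator and one with no short description at all both come out
# looking maximally complex to the measuring machine.
import os
import random
import zlib

N = 100_000

# Low-complexity source: a short deterministic program with a small seed.
rng = random.Random(42)
simple = bytes(rng.getrandbits(8) for _ in range(N))

# High-complexity source: OS entropy, as close to "no finite short
# description" as we can get in practice.
opaque = os.urandom(N)

def ratio(data):
    """Compressed size over original size; ~1.0 means 'incompressible'."""
    return len(zlib.compress(data, 9)) / len(data)

# Both ratios land near 1.0: the measure cannot tell a machine that is
# merely far more complex than itself from one that is infinitely so.
print(f"simple PRNG : {ratio(simple):.3f}")
print(f"os.urandom  : {ratio(opaque):.3f}")
```

The point of the sketch is only the comparison: zlib, like any bounded observer, conflates "too complex for me to model" with "infinitely complex".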

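As for the "practical experiments" on human predictability Rogers alludes to, a toy version in the spirit of Shannon-style penny-matching predictors can be sketched. The "human-like" generator below is a simulated stand-in of my own (real subjects, when asked to produce random sequences, alternate more often than chance, and that is the bias a two-state predictor exploits):

```python
# Sketch: a two-state (order-1) predictor against a simulated subject
# who alternates symbols more often than chance, as humans tend to do
# when asked to behave randomly.
import random
from collections import defaultdict

def humanlike(n, p_alternate=0.7, seed=1):
    """Stand-in for a human subject: flips the previous symbol with
    probability 0.7 instead of the unbiased 0.5."""
    rng = random.Random(seed)
    seq = [rng.randint(0, 1)]
    for _ in range(n - 1):
        prev = seq[-1]
        seq.append(1 - prev if rng.random() < p_alternate else prev)
    return seq

def predict(seq):
    """For each symbol, guess whichever symbol has most often followed
    the previous one so far -- a two-state machine, nothing more."""
    counts = defaultdict(lambda: [0, 0])
    hits = 0
    for prev, cur in zip(seq, seq[1:]):
        c = counts[prev]
        guess = 0 if c[0] >= c[1] else 1
        hits += (guess == cur)
        c[cur] += 1
    return hits / (len(seq) - 1)

acc = predict(humanlike(20_000))
print(f"accuracy: {acc:.3f}")  # well above the 0.5 chance level
```

A predictor this simple beating chance is the unremarkable end of the literature; the actual experiments use richer models, but the mechanism is the same.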


-- 
Work: http://en.wikipedia.org/wiki/Dhurakij_Pundit_University - Think tank:
http://www.asianforesightinstitute.org/index.php/eng/The-AFI

P2P Foundation: http://p2pfoundation.net  - http://blog.p2pfoundation.net

Connect: http://p2pfoundation.ning.com; Discuss:
http://listcultures.org/mailman/listinfo/p2presearch_listcultures.org

Updates: http://del.icio.us/mbauwens; http://friendfeed.com/mbauwens;
http://twitter.com/mbauwens; http://www.facebook.com/mbauwens