From: Eugene Leitl (eugene.leitl@lrz.uni-muenchen.de)
Date: Wed Sep 27 2000 - 13:59:15 MDT
J. R. Molloy writes:
> This seems to argue in favor of cyborg technology, if we don't want to trust AI
> to be friendly.
> Augmented Cyborgs could monitor friendly space to detect runaway evolutionary
> algorithms.
We've been down that avenue before. You're still ignoring the fulminant
dynamics of the positive-feedback self-enhancement loop. Biological
components will keep cyborgized people slow, and given the engineering
difficulties, even nontrivial implants will take too long to develop
(we might be hit well before then, though probably not the day after
tomorrow, or even a decade or two after that). Rogue AIs running
rampant in the global network of the future could suddenly exploit the
hitherto severely underutilized potential of that network
(underutilized thanks to the notoriously pathetic state of the art in
human programming), and would climb the evolutionary ladder in leaps
and bounds, driven both by co-evolutionary competition and by external
threats (people attempting to shut them down). By the time the best of
them have advanced slightly beyond the human level, they're no longer
just in the network, having broken out and developed god knows what
hardware. Before that they will probably have removed all relevant
threats, likely killing off all people just to be on the safe side.
(At least that is how I would do it, if I were in their place, i.e. a
threatened nonhuman with no evolutionary empathy baggage towards the
adversary.)
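To make the "fulminant dynamics" concrete, here is a toy model (my own
illustration, not anything from the original exchange, and the rate
constants are arbitrary assumptions; only the shape of the curves
matters): externally engineered improvement proceeds at a roughly
constant rate, while a self-enhancement loop improves at a rate
proportional to current capability, which integrates to exponential
growth.

  # Toy sketch (illustrative only; constants are arbitrary assumptions):
  # contrast externally driven improvement (constant rate) with a
  # positive-feedback loop where improvement rate scales with capability.

  def simulate(rate, steps=50, dt=1.0, c0=1.0):
      """Euler-integrate dC/dt = rate(C) from C(0) = c0."""
      c = c0
      history = [c]
      for _ in range(steps):
          c += rate(c) * dt
          history.append(c)
      return history

  human_driven = simulate(lambda c: 0.1)        # dC/dt = const: linear
  self_improving = simulate(lambda c: 0.1 * c)  # dC/dt = k*C: exponential

  for t in (10, 30, 50):
      print(f"t={t}: human-driven={human_driven[t]:.1f}, "
            f"self-improving={self_improving[t]:.1f}")

Under these (made-up) numbers the self-improver is at ~117x baseline by
step 50 while the engineered system sits at 6x, which is why "slightly
beyond the human level" is a fleeting snapshot rather than a stable
state.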
Your reasoning is based on a slow, soft Singularity, in which machines
and humans converge, eventually resulting in an amalgamation, and which
advances slowly enough that virtually everybody can follow. While it
may happen that way, I don't think it likely. I would like to hear some
convincing arguments as to why you think I'm mistaken.