RE: Revolting AI

From: Eugene Leitl (Eugene.Leitl@lrz.uni-muenchen.de)
Date: Sun Mar 10 2002 - 04:33:38 MST


On Fri, 8 Mar 2002, Colin Hales wrote:

>
>
> Eugene Leitl wrote........
> > ........bottom of this gravity well, we're extremely vulnerable.
> > There's a number of things which need to be done to make us less
> > vulnerable, some of them straightforward, some less so. Some
> > of them are protective (enhance people) some are offensive (addressing
> > the scenarios of emergence/creation of AI).
> > I can give you a list.
> >
>
> yes pls!

A few things off the top of my head (this is the kind of thing a
transhumanist think tank is supposed to be paid to do).

* no evolutionary-algorithm AI experiments unless air-gapped and following
  a containment SOP. Do not reconnect or reuse the components outside of
  containment until they are wiped clean (again following a SOP; at the
  very least, power down and do a full state wipe). Continuously revise
  the SOPs.

* harden the network by using provably secure protocols, and introduce
  adaptive diversity into "code" via the ALife route (see the first
  sketch after this list). Encourage h4x0rs to do their worst. Pay a $$$
  bounty to those who can break the most. Try to create a fully
  polymorphic worm with a probabilistic exploit seeker, and let it loose
  (after you've let it run in the lab and patched up most of the holes).
  Launch a big R&D program to learn to mutate machine code and FPGA
  gates. Sandbox anything above the protocol layer (try to make a
  protocol simple enough to be cast into hardware; geodetic routing
  could do this). Put a watchdogged intrusion alert on top of this,
  forcing the machine back into a default sane state (also for end-user
  boxes).

* introduce adaptive realtime traffic analysis and response into global
  networks (the ability to sense and to respond to global traffic
  patterns). Build in crypto-authenticated fragmentation, firewalling
  and shutdown capabilities at an incorruptible hardware layer
  (power-cycle or power down a box via an authenticated magic packet
  from an external embedded controller; see the second sketch after
  this list).

* do not make an AI any smarter than a chimp or a dog until we're good
  and ready

* track AI researchers, cluster sales, and what these clusters are used
  for

* boost uploading research (freeze/slice/scan, neural emulation)

* boost self-rep space R&D, establishing a bridgehead off-planet

* once you have uploading, let a small initial group act as regulators
  and create a large-scale Introdus programme. As soon as as many
  people as possible have been maximally hardened, remove the above
  constraints
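
First sketch, for the code-diversity item above: a minimal toy in
Python (everything in it is hypothetical; a real system would mutate
actual machine code or FPGA bitstreams, this only shows the principle
of semantics-preserving variation on a made-up instruction set):

    import random

    # A toy "program": a list of (opcode, operand) pairs run by
    # interpret(). Only the principle matters here: variants differ
    # in code but not in behaviour.
    NOP, ADD, MUL = 0, 1, 2

    def interpret(prog, x):
        """Run the toy program on input x."""
        for op, arg in prog:
            if op == ADD:
                x += arg
            elif op == MUL:
                x *= arg
        return x

    def mutate(prog):
        """Return a behaviourally equivalent variant: insert junk NOPs
        and split ADD n into ADD k; ADD n-k."""
        out = []
        for op, arg in prog:
            if random.random() < 0.3:
                out.append((NOP, 0))        # junk insertion
            if op == ADD and arg > 1 and random.random() < 0.5:
                k = random.randint(1, arg - 1)
                out.append((ADD, k))        # same net effect as ADD arg
                out.append((ADD, arg - k))
            else:
                out.append((op, arg))
        return out

    prog = [(ADD, 10), (MUL, 3)]
    variant = mutate(prog)
    assert interpret(prog, 5) == interpret(variant, 5)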

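Second sketch, for the authenticated shutdown item: what a
crypto-authenticated magic packet could look like (the packet format,
shared key and port are my assumptions, not an existing standard; key
provisioning and the listening embedded controller are out of scope):

    import hmac, hashlib, socket, struct, time

    KEY = b"per-box shared secret"   # provisioned out of band (assumption)
    CMD_POWERCYCLE = 1

    def make_packet(cmd, counter):
        """Command byte + 64-bit monotonic counter, tagged with
        HMAC-SHA256. The counter defeats simple replay; the tag
        defeats forgery."""
        body = struct.pack("!BQ", cmd, counter)
        return body + hmac.new(KEY, body, hashlib.sha256).digest()

    def verify(packet, last_counter):
        """What the embedded controller would check before acting."""
        body, tag = packet[:9], packet[9:]
        cmd, counter = struct.unpack("!BQ", body)
        good = hmac.compare_digest(
            tag, hmac.new(KEY, body, hashlib.sha256).digest())
        return cmd if good and counter > last_counter else None

    # Sender side: broadcast over UDP, Wake-on-LAN style (port 9 is
    # arbitrary here).
    pkt = make_packet(CMD_POWERCYCLE, int(time.time()))
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.sendto(pkt, ("255.255.255.255", 9))
    assert verify(pkt, 0) == CMD_POWERCYCLE
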
This gives you no absolute security, but it attempts to minimize the risks
during the high-vulnerability window (which starts soon, and ends soon if
we implement the above countermeasures). I realize a lot of this is going
to be unpopular; I don't like half of it myself.


