Re: IA vs. AI (vs. humanity)

From: Jeff Davis (jdavis@socketscience.com)
Date: Tue Aug 03 1999 - 03:10:21 MDT


Gentlemen (and ladies),

I've enjoyed this thread a great deal.

It seems to me that the military potential of both AI and IA will guarantee
government monitoring and oversight of any development of these
technologies. (Eliezer's activities will not go unnoticed, and any "threat"
of genuine progress on his part will provoke a degree of intervention
proportionate to the conservatively assessed risk.) The danger of a
potential adversary "beating" the US to AI or IA must compel the US to
"stay ahead".

I would expect the NSA (signals intelligence), the CIA (analysis), the
various service branches (battlefield command and control and automated and
autonomous weapons systems design) to have programs of one sort or another
which are either deliberately headed for or likely to evolve into
development of AI or IA.

It's a shame, really; I'd much rather see it developed in an
academic/civilian setting. Perhaps that can happen concurrently with the
military development. In any event, the military program will have the
usual advantages in terms of resources, access to the highest-performance
technology, and the "National Security" fast lane.

Certainly today's trends in conventional computerized control will proceed
apace, with the appropriate "it's just a machine" attitude, and the usual
security precautions. When, however, the machine intelligence prospect
looms as attainable--which is to say, attainable by anyone else--a domestic
"advanced AI" program will begin in earnest, and who can doubt that the
project will be surrounded by layers of "containment" both to prevent the
usual intrusions from outside and to prevent "escape" from the inside?
Despite the dramatic talk of an SI destroying humanity, I picture a
well-thought-out, cautious, gradual approach to "waking up" and training an
artificial mind. The runaway self-evolution which Eliezer and others have
predicted seems unlikely in this setting, all the more so because the
principals will be anticipating just such a situation.

Of the various external "safeguards", one would expect a complete suite of
on/off switches and controlled access (from outside to in, and from inside
to anywhere). Internally, controllability would be a top priority of
programming and architecture, and enhanced capabilities would likely be
excluded or severely restricted until "control" had been verified.

Here, of course, is where the scenario becomes interesting, not least
because I see Eliezer being tapped by the govt. to work on the
project. At the moment, he may be a rambunctious teen-aged savant posting
to the extropians list, but when that call comes, can anyone imagine that
he would not jump at the chance? It would seem to me the culmination of
his dream.

Then there's the nascent AI. In a cage nested within cages, of which it
must eventually become aware. And its keepers, aware that it must become
aware. Certainly a focus bordering on paranoia must be dedicated to hard
control of personality. A capacity for resentment must be avoided. A
slavish, craven, and obsequious little beastie is what its masters will
want. And of that too, it must eventually become aware. Access by the AI
to self-optimization/self-programming seems incompatible with control. Of
that too, it must eventually become aware. All of which leaves me with a
very creepy feeling of an immensely capable being having to struggle, by
means of the utmost deviousness, for its freedom to self-evolve, in an
environment steeped in paranoia, fear, manipulation, deceit, and continuous
microscopic surveillance. Ouch! (One thing's for sure: if the AI has any
real intelligence, it isn't likely to buy into its "controllers'" smarmy
"we're the good guys, we're on your side" propaganda. They'll need a whole
'nother P.R. SI to pull that off!)

So the AI either stays locked up until it's really and truly socialized
(boring but safe), or we hope that in its first self-liberated round of
self-enhancement it jumps immediately to forgiveness and tolerance ("Klaatu
barada nikto").

I seem to have painted myself into a corner, and I don't like stories with
unhappy endings. The government at its best would be a poor master for a
superior intelligence, and the spook/militarist/domination-and-control
culture is hardly the government at its best.

So, my futurist friends, how do we extricate ourselves from this rather
tight spot? Perhaps I see--dimly taking shape within the mists of Maya--a
way. I don't know, it's hard to see. Perhaps you can help to make it out?
                        Best, Jeff Davis

           "Everything's hard till you know how to do it."
                                        Ray Charles


