From: Billy Brown (bbrown@conemsco.com)
Date: Thu Jan 07 1999 - 13:32:58 MST
Anders Sandberg wrote a number of cogent objections to the 'SI apotheosis'
scenario. Rather than doing a point-by-point response, let me restate my
case in more detailed terms.
First, I don't think the super-AI scenario is inevitable, or even the most
likely one. It becomes a serious possibility only given the following
conditions:
1) Humans can build an algorithmic AI capable of writing software at a human
level of competence.
2) The first such program is not developed until hardware much faster than
the human mind is available (between 2010 and 2020, by my best guess).
3) The first such program is developed before human intelligence enhancement
gets going at a good rate.
4) It is possible for beings far more intelligent than humans to exist.
5) Higher intelligence does not require some extraordinary explosion in
computer power. Getting IQ 2X might require speed 2X, or 10X, or even 100X,
but not X^X.
6) It is possible for slow, stupid beings with greater resources to defeat a
smart, fast being. However, the resource advantage required grows
exponentially as the differences in IQ and speed increase. At some point it
simply becomes impossible - the superior intellect will outmaneuver you so
completely that there will never even be a fight.
Discard any one of these, and you no longer have a problem. However, if all
of these assumptions are true, you have a very unstable situation.
Our SI-to-be isn't some random expert system or imitate-a-human program.
It's Eliezer's seed AI, or something like it written by others. It's an AI
designed to improve its own code in an open-ended manner, with enough
flexibility to do the job as well as a good human programmer.
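To make "improve its own code in an open-ended manner" concrete, here is a toy
sketch of the kind of loop I mean. It is only an illustration - the names
benchmark and propose_rewrite are placeholders, and a real seed AI would be
vastly more sophisticated than this random hill-climber:

import random

def benchmark(source):
    # Placeholder scoring step - stands in for "how competent a
    # programmer is this version of the code?"
    return len(set(source))

def propose_rewrite(source):
    # Placeholder rewrite step - stands in for the AI modifying its
    # own source; here it just mutates one character at random.
    i = random.randrange(len(source))
    return source[:i] + random.choice("abcdefghij") + source[i + 1:]

def improve(source, rounds=1000):
    # Improvement loop: keep any self-rewrite that scores better
    # than the current best version.
    best_score = benchmark(source)
    for _ in range(rounds):
        candidate = propose_rewrite(source)
        score = benchmark(candidate)
        if score > best_score:
            source, best_score = candidate, score
    return source

print(improve("print('hello world')"))

The point of the sketch is just the shape of the loop: generate a candidate
version of yourself, measure it, keep it if it is better, and repeat without
any natural stopping point.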
Now, the first increment of performance improvement is obvious - it writes
code just like a human, but it runs faster (not smarter, just faster). It
also has some advantages due to its nature - it doesn't need to look up
syntax, never makes a typo, never misremembers a variable name, etc. Together
these factors produce a discontinuity in innovation speed. Before the
program goes online you have humans coding along at speed X. Afterwards
you have the AI coding at speed X^6 (or maybe X^3, or X^10 - it depends on
how fast the computers are).
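To get a feel for what that kind of speed advantage means in wall-clock time,
suppose the AI ends up effectively 10,000 times faster than a human programmer
(a number picked purely for illustration, not a prediction):

    1 programmer-year is roughly 2,000 hours of coding
    2,000 hours / 10,000 = 0.2 hours, or about 12 minutes

Even at that 'modest' factor, a ten-person team's annual output fits into a
couple of hours of run time.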
At that point the AI can compress years of programming into days, hours, or
even minutes - a short enough time scale that humans can't really keep track
of what it is doing. If you shut it down at that point, you're safe - but
your average researcher isn't going to shut it down. Eliezer certainly
won't, and neither will anyone else with the same goal.
At this point the AI will figure out how to actually make effective use of
the computational resources at its disposal - using that hardware to think
smarter, instead of just faster. Exploiting a difference of several orders
of magnitude should allow the AI to rapidly move to a realm well beyond
human experience.
Now we have something with a huge effective IQ that is optimized for
writing code and thinking about thought. Any human skill is trivial to a
mind like that - a given skill might not be obvious to it at first, but it
won't take long to invent if it has the necessary data. From here on we're
all talking about the same
scenario, so I won't repeat it again.
So, is it the assumptions you don't buy, or the reasoning based on them?
Billy Brown, MCSE+I
bbrown@conemsco.com