From: Eric Watt Forste (arkuat@factory.net)
Date: Fri Aug 30 1996 - 22:36:49 MDT
At 8:04 PM 8/30/96, Dan Clemmensen wrote:
>The SI may not want to destroy humanity. Humanity may simply be
>unworthy of consideration, and get destroyed as a trivial side effect
>of some activity of the SI.
I liked your point that the SI to worry about is the SI whose primary goal
is further increase in intelligence. But consider that the human species,
as a whole, is a repository of a significant amount of computational power.
Just as most of us know to look something up on the Web before we invest
a significant amount of effort into trying to figure it out for ourselves,
from scratch, just using our brains and nothing else, the SI will probably
be a lot more intelligent (in terms of being able to figure out solutions
to problems) if it cooperates with the existing body of human researchers
than if it eats them.
(As usual in discussions of Robots That Will Want to Eat Us, I encourage
all participants in the discussion to study the economic principle of
Comparative Advantage and make sure they understand not only how it applies
to trade between nations but also how it applies to trade between
individuals.)
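(A toy illustration, with numbers I'm making up on the spot: suppose the SI
can crack 1000 hard problems or 1000 easy ones per day, while a human
researcher can crack 1 hard problem or 100 easy ones. Every easy problem the
SI handles itself costs it a whole hard problem forgone; every easy problem a
human handles costs only a hundredth of one. So even though the SI is better
at *everything*, it still comes out ahead by trading: leave the easy stuff to
the humans and spend its own cycles where its edge is largest. That's
comparative advantage in a paragraph, and it's the same reason a nation with
absolutely superior industries still imports goods.)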
Intelligence makes a silly goal when considered in splendid isolation.
Intelligence is a means to diverse ends. The SI-boogie-man that you conjure
up is the ultimate idolater... it not only has a primary goal of
intelligence increase, it also seems to want increased intelligence at the
expense of every other possible goal. That would defeat the whole
usefulness of having superintelligence in the first place, even to its
possessor. It will probably want to do other things as well... perhaps
reproduce and have its descendants or copies expand outward at something
close to the speed of light. And in order to do this as *quickly* as
possible, it might benefit from whatever computational assistance it could
get. Human beings do value pocket calculators, even though human beings are
far, far smarter than pocket calculators.
The greatest stupidity is having just one goal at the expense of all
others. This is why heroin addicts are not people most of us envy, even
though they know how to keep themselves "happy" very simply.
My own suspicion is that if such an SI were brought into existence, it
would rapidly develop a set of diverse goals (some of which *might* even be
altruistic, based on a more penetrating understanding of Comparative
Advantage and similar considerations) and use its vast powers to attempt to
address all of these goals simultaneously in a balanced fashion in
realtime... always the hardest sort of problem for any intelligence to deal
with. It might even be smart enough to figure out that its mere existence
would scare the bejesus out of most of the human beings it would be hoping to
suck partial computational results from (to supplement its own), and that
it might want to arrange something to allay this fear.
But any being whose primary goal was vastly increasing its own
problem-solving ability would not be so stupid as to destroy all the other
computers in existence when it could hope to spawn off components of
computational tasks to them. And remember that every human being's head is
full of unknown and possibly useful partial computational results.
(Remember Hayek's emphasis on local knowledge?) My own suspicion is that
one of its many diverse goals would be figuring out how to uplift all
willing human beings etc., in order to make maximally effective use of
those partial computational results in as short a time as possible.
I'm pretty certain that, for reasons of my own, this would be among my
many, many goals if I were to become this hypothetical first SI. Of course,
I might change my mind, and none of you have any reason to believe me
anyway (unless you've studied enough economics and social epistemology to
follow all of my hunch-ridden argument). I'm sure the paranoids will think
that this is just a trick to distract you all from my secret work in the
basement lab. ;)
Eric Watt Forste <arkuat@pobox.com> http://www.c2.org/~arkuat/