From: Nick Bostrom (bostrom@ndirect.co.uk)
Date: Tue Aug 04 1998 - 19:26:13 MDT
What is superintelligence?
A superintelligence is any intellect that greatly outperforms the best
human brains in practically every field, including general wisdom,
scientific creativity and social skills. This definition leaves open
how the superintelligence is implemented. For example, it could be a
classical AI or a neural network, or a combination of the two. It
could run on a digital computer, a network of interconnected
computers, a human brain augmented with extra circuitry, or what have
you. The definition also leaves open whether the superintelligence is
conscious and has subjective experiences.
Sometimes a distinction is made between weak and strong
superintelligence. Weak superintelligence is what you would get if
you could run a human-like brain at an accelerated clock speed,
perhaps by uploading a human onto a computer [see "What is
uploading?"]. If the upload's clock-rate were a thousand times that of
a biological human brain, it would perceive reality as slowed down by
a factor of a thousand, and so it could think a thousand times as many
thoughts in a given interval as its natural counterpart.
Strong superintelligence denotes an intellect that is not only faster
than a human brain but also qualitatively superior. No matter how
much you sped up a dog brain, you would not get a human-equivalent
brain. Similarly, some people think that there could be strong
superintelligence that no human brain could match, no matter how fast
it runs. The distinction between weak and strong superintelligence
may, however, not be clear-cut at all.
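To make the speed-up arithmetic concrete, here is a minimal Python
sketch. The factor of a thousand is the one used in the example above;
the helper function and the rest of the bookkeeping are purely
illustrative.

    SECONDS_PER_DAY = 60 * 60 * 24
    SECONDS_PER_YEAR = 365 * SECONDS_PER_DAY

    def subjective_seconds(real_seconds, speedup):
        """Subjective thinking time experienced by an upload running
        `speedup` times faster than a biological brain."""
        return real_seconds * speedup

    speedup = 1000  # the factor used in the example above

    # One calendar day in the outside world ...
    thinking = subjective_seconds(SECONDS_PER_DAY, speedup)
    print(round(thinking / SECONDS_PER_YEAR, 2))   # -> 2.74 subjective years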
Many (but by no means all) transhumanists think that
superintelligence will be created in the first half of the next
century. This requires two things: hardware and software.
When chip-manufacturers plan new products, they rely on a regularity
called "Moore's law". It states that processor speed doubles about
every eighteen months. Moore's law has been true for all computers,
even going back to the old mechanical calculators. If it continues
to hold true for a few decades then human-equivalent hardware will
have been achieved. Moore's law is mere extrapolation, but the
conclusion is supported by more direct considerations based on what
is physically possible and what is being developed in the
laboratories today. Increased parallelization would also be a way to
achieve enough computing power even without faster processors.
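As a back-of-the-envelope check on what an eighteen-month doubling
time implies, here is a small Python sketch. Only the doubling period
comes from the text above; the growth factor of one million is an
arbitrary illustration, not an estimate of the gap between today's
hardware and the human brain.

    import math

    DOUBLING_MONTHS = 18  # doubling period cited above

    def years_to_grow_by(factor, doubling_months=DOUBLING_MONTHS):
        """Years needed for computing power to grow by `factor`
        if it doubles every `doubling_months` months."""
        doublings = math.log2(factor)
        return doublings * doubling_months / 12.0

    # Twenty doublings give roughly a factor of a million (2**20 ~ 10**6),
    # and twenty doublings at 1.5 years each take about 30 years.
    print(round(years_to_grow_by(1e6), 1))   # -> 29.9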
As for the software problem, progress in computational neuroscience
will teach us about the computational architecture of the human brain
and what learning rules it uses. We can then implement the same
algorithms on a computer. Using a neural network approach, we would
not have to program the superintelligence explicitly: we could make it
learn from experience just as a human child does. A possible
alternative to this route is to use genetic algorithms and methods
from classical AI to create a superintelligence that need not bear any
close resemblance to the human brain.
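As a toy-scale illustration of what implementing a learning rule on a
computer means, here is a single artificial neuron learning the
logical AND function from examples, using the classic perceptron
update rule in Python. Nothing about it is brain-like or specific to
the proposal above; the data, learning rate and training length are
arbitrary placeholder choices.

    # A toy learning rule: one artificial neuron learns logical AND
    # from labelled examples via the classic perceptron update.
    examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

    weights = [0.0, 0.0]
    bias = 0.0
    learning_rate = 0.1

    def predict(x):
        activation = weights[0] * x[0] + weights[1] * x[1] + bias
        return 1 if activation > 0 else 0

    for epoch in range(20):                 # a few passes over the data
        for x, target in examples:
            error = target - predict(x)     # 0 if correct, +1/-1 otherwise
            weights[0] += learning_rate * error * x[0]
            weights[1] += learning_rate * error * x[1]
            bias += learning_rate * error

    print([predict(x) for x, _ in examples])   # -> [0, 0, 0, 1]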
The arrival of superintelligence will clearly deal a philosophical
blow to any anthropocentric world-view. Much more important, however,
are the practical ramifications. A superintelligence would be the
last invention that humans will ever need to make, since
superintelligences could themselves take care of further scientific
and technological development more efficiently than humans could. The
human species will no longer be the smartest life-form in the known
universe.
The prospect of superintelligence raises many big issues and concerns
that need to be thought hard about now, before the actual
developments occur. The big question is: what can be done to maximize
the chances that the arrival of superintelligences will benefit
humans rather than harm them? The range of expertise needed to
address this question extends far beyond that of computer scientists
and AI researchers. Neuroscientists, economists, cognitive
scientists, philosophers, sociologists, science-fiction writers,
military strategists, politicians and legislators and many others
will have to pool their insights in order to deal wisely with what
may be the most important task the human species will ever face.
Transhumanists tend to want to grow into and become
superintelligences themselves. The two ways in which they hope to do
this are: (1) Through gradual augmentation of their biological
brains, perhaps using nootropics, cognitive techniques, IT tools
(e.g. wearable computers, smart agents, information filtering
systems, visualization software etc.), and, in the future, neuro/chip
interfaces and bionic brain implants. (2) Through mind uploading.
_____________________________________________________
Nick Bostrom
Department of Philosophy, Logic and Scientific Method
London School of Economics
n.bostrom@lse.ac.uk
http://www.hedweb.com/nickb