From: Doug Bailey (Doug.Bailey@ey.com)
Date: Mon Sep 14 1998 - 14:51:32 MDT
All this talk about technological singularities has gotten
me thinking about superintelligence (SI). All the usual issues came
up: feasibility, nature, motivation, and so on. I segmented
singularity scenarios into those in which SIs were developed and
those in which they were not. I thought such a distinction was
meaningful, but now I am not so sure.
I began to think about the ultimate potential of an SI versus that
of a human-level intelligence (HI) or even an augmented human-level
intelligence (>HI). An SI would be able to access copious amounts of
information quickly, sort through it with a selectivity that would
make us envious, and employ a level of insight and creativity that
would let it achieve "breakthroughs" many times faster than an HI.
An SI's cognitive abilities might have traits we cannot directly
compare to those of HIs or >HIs. However, taken at face value, this
seems to mean only that an SI would reach milestones at a faster rate.
But would it reach more milestones in total? Are there achievements,
discoveries, insights, or cognitive feats that an SI is capable of
that an HI/>HI (or billions of HIs/>HIs) could not accomplish given
enough time and resources?
An example is Einstein's General Theory of Relativity. If we could
place an SI in a room in 1600 A.D. and supply it with all the
knowledge (especially physics) available to HIs at that point (e.g.,
the works of Brahe, Galileo, and Copernicus), how long would it take
the SI to develop the calculus, discover the conservation of momentum
and Newton's laws, and reach all the other insights necessary for it
to arrive at Einstein's General Theory of Relativity? It would probably
get the job done well before 1915. However, HIs would be able to match
the SI's feat, just through a longer, more tedious process. The SI might
be more efficiently intelligent but not necessarily more profoundly
intelligent.
So, could an SI discover something that an HI could not? Is the ultimate
potential of an SI greater than that of HIs/>HIs? Frankly, my head
starts to hurt when I think about these questions. It would be exceedingly
hard to identify with certainty something an SI could do that we could
not, since any such feat would be so entirely beyond our minds that mere
identification would be impossible. The feat would have to be beyond
the wildest limits we can imagine. While we don't know how to engineer
at Planck length scales, we don't know for certain that we will never
be able to do so, or that SIs will most certainly be able to do so.
I can't think of anything that I know we will never be able to do but
that an SI could. I can't think of an insight an SI might have that,
given enough time and coaching, we could not fathom or achieve. The
SI in the example above might talk about relativity to Isaac Newton
and friends, who might think the whole idea nonsense, blasphemy, or
what have you. But eventually someone would "get it".
This might seem like anthropic hubris on my part. Concluding that no
such feat exists simply because I can't think of something an SI could
accomplish that HIs could not, given enough time, might seem prideful.
However, barring some fundamental limitation in the way HIs think or
process information that would inhibit us from understanding some
particular SI insight, it seems that anything an SI knows we could
eventually know. This is all a rather long-winded prelude to my idea
of a "virtual SI".
VIRTUAL SI
Are we SIs? This question might seem silly, but I'm serious. To answer it
we need a definition of intelligence. Not being a cognitive expert, I went
looking for one. Unfortunately, there seems to be a lack of consensus.
I'll struggle through the question anyway, hoping the cognition
cognoscenti will supplement and revise my comments where necessary.
Nick Bostrom defined superintelligence in his paper "How Long Before
Superintelligence?" at http://www.hedweb.com/nickb/superintelligence.htm
as follows:
By a "superintelligence" we mean an intellect that is much smarter
than the best human brains in practically every field, including
scientific creativity, general wisdom and social skills.
Suppose I handed this definition to a philosopher in Roman times and then
asked him to devise a test for some of the members of this list. Would the
philosopher judge the answers given on this test to be those of SIs? He
would see selective cognitive ability far superior to his own. The list
members would be extraordinarily efficient at solving problems and
filtering through data, using the scientific method and other cognitive
tools devised between Roman times and now. What if the list members were
able to use their laptops for the test? The philosopher might scratch his
head as he sees the answers to multiplication problems involving 90-digit
numbers (not to mention no apparent calculations). [He'd probably also
wonder where the list members got those souped-up abaci.] The philosopher
might well conclude the list members were possessed of the scientific
creativity, wisdom, knowledge, and cognitive tools of a superintelligence.
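Just to underline how trivial the laptop's feat really is, here is a
minimal sketch in Python (the language and the random operands are purely
my own illustrative choices): its built-in integers have arbitrary
precision, so multiplying two 90-digit numbers exactly takes microseconds.

    import random

    # Two random 90-digit numbers -- operands that would send the
    # Roman philosopher reaching for his souped-up abacus.
    a = random.randrange(10**89, 10**90)
    b = random.randrange(10**89, 10**90)

    # Python integers have arbitrary precision, so the exact product
    # (179 or 180 digits) is computed instantly, with no visible work.
    product = a * b
    print(f"{len(str(product))}-digit product: {product}")

From the philosopher's side of the test, of course, only the answer is
visible, which is exactly why the feat looks superintelligent.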
The ability to accumulate knowledge allows HIs as a collective to defy
individual HI resource limitations such as time, memory, and processing speed.
Einstein "downloaded" the knowledge of others before him and used his scientific
creativity to make his breakthrough insights. The accumulation of knowledge
allowed Einstein to perform a feat in 1915 that only an SI might have been
able to perform in the 1700s.
Stepping back further in time, we see the "Virtual SI" concept more clearly.
The hunter-gatherers of 20,000 B.C.E. would consider us "gods". Our
motivations would seem totally alien to them; our thinking, our feats, and
our accomplishments would be the stuff of legend. If they were armed with
the definition of superintelligence, and could understand it, they'd
consider us SIs as well.
I admit this "Virtual SI" idea is empirically weak. I've not developed it
much beyond what you see here. However, the idea works in concert with my
wanderings about the "ultimate potential" issue. [Warning: question barrage
to ensue.] Should we redefine SI as something beyond what HIs can ever
dream of reaching? Otherwise, would not a transhuman that had been uploaded
with the complete corpus of human knowledge in short-term memory, highly
optimized cognitive filtering algorithms, and creativity algorithms be
considered by us to be an SI? Where does the partition between >HI and
SI lie? At what point do the changes we make to the neural net of a >HI
render it something other than "human"?
[Note: My apologies for the less-than-stellar organization, but I wrote this
in one pass. If I had waited to post it until I had time to optimize it, it
would never have seen the light of day.]
Doug Bailey
doug.bailey@ey.com