From: Amara D. Angelica (amara@kurzweilai.net)
Date: Tue Sep 04 2001 - 16:26:03 MDT
Ray Kurzweil responds to Stephen Hawking on CNN at 7:30 PM ET, Sept. 4.
Ray’s response, posted to KurzweilAI.net’s MindX forum:
http://www.kurzweilai.net/mindx/frame.html?main=show_thread.php?rootID%3D2386%23id2402
Stephen Hawking recently told the German magazine Focus that computers were
evolving so rapidly that they would eventually outstrip the intelligence of
humans. Professor Hawking went on to express the concern that eventually,
computers with artificial intelligence could come to dominate the world.
Hawking’s recommendation is to (i) improve human intelligence with genetic
engineering to "raise the complexity of ... the DNA" and (ii) develop
technologies that make possible "a direct connection between brain and
computer, so that artificial brains contribute to human intelligence rather
than opposing it."
Hawking’s perception of the acceleration of nonbiological intelligence is
essentially on target. It is not simply the exponential growth of
computation and communication that is behind it, but also our mastery of
human intelligence itself through the exponential advancement of brain
reverse engineering.
Once our machines can master human powers of pattern recognition and
cognition, they will be in a position to combine these human talents with
inherent advantages that machines already possess: speed (contemporary
electronic circuits are already 100 million times faster than the
electrochemical circuits in our interneuronal connections), accuracy (a
computer can remember billions of facts accurately, whereas we’re hard
pressed to remember a handful of phone numbers), and, most importantly, the
ability to instantly share knowledge.
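The "100 million times faster" figure can be sanity-checked with rough, back-of-envelope numbers. The specific timings below are illustrative assumptions, not measurements from the original text:

```python
# Rough sanity check of the "100 million times faster" claim.
# Both timing figures are illustrative assumptions.
neural_cycle_s = 5e-3        # ~5 ms for an electrochemical signaling cycle (assumed)
electronic_cycle_s = 5e-11   # ~50 ps for a contemporary logic gate (assumed)

ratio = neural_cycle_s / electronic_cycle_s
print(f"electronics faster by ~{ratio:.0e}x")  # ~1e+08, i.e., ~100 million
```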
However, Hawking’s recommendation to do genetic engineering on humans in
order to keep pace with AI is unrealistic. He appears to be talking about
genetic engineering through the birth cycle, which would be absurdly slow.
By the time the first genetically engineered generation grows up, the era of
beyond-human-level machines will be upon us.
Even if we were to apply genetic alterations to adult humans by introducing
new genetic information via gene therapy techniques (not something we’ve yet
mastered), biological intelligence still wouldn’t stay in the lead. Genetic
engineering (through either birth or adult gene therapy) is inherently
DNA-based, and a DNA-based brain will always be extremely slow and limited in
capacity compared to the potential of an AI.
As I mentioned, electronics is already 100 million times faster than our
electrochemical circuits; we have no quick downloading ports for our
neurotransmitter-based memories; and so on. We could bioengineer smarter
humans, but this approach will not begin to keep pace with the exponential
progress of computers, particularly once brain reverse engineering is
complete (within about thirty years).
The human genome is 800 million bytes, but if we eliminate the redundancies
(e.g., the sequence called “ALU” is repeated hundreds of thousands of
times), we are left with only about 23 million bytes, smaller than Microsoft
Word. The limited amount of information in the genome specifies stochastic
wiring processes that enable the brain to be millions of times more complex
than the genome which specifies it. The brain then uses self-organizing
paradigms so that the greater complexity represented by the brain ends up
representing meaningful information. However, the architecture of a
DNA-specified brain is relatively fixed and involves cumbersome
electrochemical processes. Although there are design improvements that could
be made, there are profound limitations to the basic architecture that no
amount of tinkering will address.
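The genome-size figures above can be reproduced with simple arithmetic. The base-pair count and the two-bits-per-base encoding are standard background assumptions; the 23-million-byte compressed figure is taken from the text:

```python
# Back-of-envelope version of the genome-size figures in the text.
# The base-pair count is an assumed approximation.
base_pairs = 3.2e9                 # approximate human genome length (assumed)
raw_bytes = base_pairs * 2 / 8     # 2 bits per base pair (A/C/G/T)
compressed_bytes = 23e6            # figure quoted in the text after removing redundancy

print(f"raw: ~{raw_bytes / 1e6:.0f} million bytes")        # ~800 million bytes
print(f"redundancy factor: ~{raw_bytes / compressed_bytes:.0f}x")
```

The roughly 35-fold redundancy factor is dominated by repeated sequences such as ALU, as the text notes.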
As for Hawking’s second recommendation, namely direct connection between the
brain and computers, I agree that this is reasonable, desirable, and
inevitable. It’s been my recommendation for years.
I describe a number of scenarios to accomplish this in my most recent book,
The Age of Spiritual Machines, and in the book précis “The Singularity is
Near”
(http://www.kurzweilai.net/meme/frame.html?main=/articles/art0134.html).
I recommend establishing the connection with noninvasive nanobots that
communicate wirelessly with our neurons. As I discuss in the précis, the
feasibility of communication between the electronic world and that of
biological neurons has already been demonstrated. There are a number of
advantages to extending human intelligence through the nanobot approach.
They can be introduced noninvasively (i.e., without surgery). The
connections will not be limited to one or a small number of positions in the
brain. Rather, the nanobots can communicate with neurons (and with each
other) in a highly distributed manner. They would be programmable, would all
be on a wireless local area network, and would be on the web.
They would provide many new capabilities, such as full-immersion virtual
reality involving all the senses. Most importantly, they would provide many
trillions of new interneuronal connections as well as intimate links to
nonbiological forms of cognition. Ultimately, our minds won’t need to stay
so small, limited as they are today to a mere hundred trillion connections
(extremely slow ones at that).
However, even this will only keep pace with the ongoing exponential growth
of AI for a couple of additional decades (to around the middle of the
twenty-first century). As Hans Moravec has pointed out, a hybrid
biological-nonbiological brain will ultimately be 99.999...% nonbiological,
so the biological portion becomes pretty trivial.
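Moravec's point follows directly from the arithmetic of a fixed biological capacity alongside an exponentially growing nonbiological one. The starting values and the yearly doubling rate below are assumptions for illustration (the hundred-trillion-connection figure is from the text):

```python
# Sketch of Moravec's point: a fixed biological capacity plus an
# exponentially growing nonbiological one. The parity starting point
# and the yearly doubling rate are illustrative assumptions.
biological = 1e14          # ~10^14 interneuronal connections (from the text)
nonbiological = 1e14       # assume parity at some crossover year
for year in range(30):     # three decades of yearly doubling (assumed rate)
    nonbiological *= 2

fraction_biological = biological / (biological + nonbiological)
print(f"biological share after 30 doublings: {fraction_biological:.2e}")
```

Under these assumptions the biological share falls below one part in a billion within three decades, which is the sense in which it "becomes pretty trivial."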
We should keep in mind, though, that all of this exponentially advancing
intelligence is derivative of biological human intelligence, derived
ultimately from the thinking reflected in our technology designs, as well as
the design of our own thinking. So it's the human-technology civilization
taking the next step in evolution. I don’t agree with Hawking that "strong
AI" is a fate to be avoided. I do believe that we have the ability to shape
this destiny to reflect our human values, if only we could achieve a
consensus on what those are.
This archive was generated by hypermail 2.1.5 : Sat Nov 02 2002 - 08:10:23 MST