Re: [>Htech] The Age on Cyborgology

From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Tue Nov 21 2000 - 10:50:27 MST


The problem with public "reinterpretation" of scientific data and/or
progress is that so much is lost in the translation that it ends up
creating grossly distorted impressions. This is an attempt to inject
some "reality" into the discussion.

Regarding:
>http://www.theage.com.au/news/20001105/A26872-2000Nov4.html
>Want to live to 200? Being a cyborg has advantages
>By GARRY BARKER
>TECHNOLOGY REPORTER
>Sunday 5 November 2000

>If quantum theories develop into practical computing, machines may begin to
>emulate and then surpass human intelligence.

Poppycock!! Everyone loves to invoke "quantum computing" as a requirement
for computers surpassing human intelligence. There are *two* problems
with this.

First, it isn't necessary. Existing trends in microelectronics will get
us to petaflops (10^15 ops/sec) computing with IBM's Blue Gene within a
few years. Estimates of human brain calculation equivalence range from
10^13 ops/sec (Moravec, 1987) to 10^17 ops/sec (McEachern, 1993), so a
petaflops machine is in the ballpark. Special-purpose hardware (PACT
GmbH's multiply-and-accumulate chips or Hugo de Garis's neural-network
machines) will put us at desktop equivalence of the human brain by
2005-2010. However, as work by de Garis and Doug Lenat (at Cycorp) has
shown, the problem is *not* hardware equivalence. It's figuring out
good ways of programming the net. You have to bear in mind that it
takes 2-6 *years* of full-time learning for a human brain "net" to
program itself so that it is interesting to interact with. Even if you
had lots of human-equivalent desktop machines by 2010, it is a very
open question at this point how long it would take them to "learn"
enough to be considered "intelligent". Now, the interesting thing about
resident computer intelligence is that you can copy it from machine to
machine much faster than you can propagate it among humans.
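
As a sanity check on that "ballpark" claim, here is the arithmetic in
Python, using only the figures cited above (illustrative only, not a
benchmark of anything):

    # Back-of-envelope: compare a petaflops machine against the brain
    # ops/sec estimates cited above.  Illustrative arithmetic only.
    petaflops = 1e15    # ops/sec, the Blue Gene class machine noted above
    brain_low = 1e13    # ops/sec, low-end estimate (Moravec)
    brain_high = 1e17   # ops/sec, high-end estimate (McEachern)

    print(f"vs. low estimate:  {petaflops / brain_low:.0f}x a brain")
    print(f"vs. high estimate: {petaflops / brain_high:.0%} of a brain")

So a petaflops machine is anywhere from 100 brains to 1% of a brain,
depending on whose estimate you believe -- "in the ballpark", with a
couple of orders of magnitude of uncertainty either way.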

The second thing about quantum computing that people MUST remember is
that I have seen nothing that says it can be used for "general purpose"
computing. Charles H. Bennett of IBM Research, one of the world's
leading experts on the physics of computation, co-authored a paper in
1994 entitled "Strengths and Weaknesses of Quantum Computing". People
should listen to the *experts* in a field, such as leading physicists,
not to popularizers such as Kurzweil (a programmer), or, worse yet,
newspaper reporters.
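
For what it's worth, the gist of that paper (taken together with
Grover's search algorithm) is that for brute-force, unstructured search
a quantum computer needs roughly the square root of the number of
queries a classical machine needs -- a quadratic speedup, not the
exponential magic the popular press implies. A rough illustration (the
problem sizes below are arbitrary, chosen only to show the scaling):

    # Illustration only: classical (~N) vs. quantum (~sqrt(N)) query
    # counts for unstructured search, the kind of bound discussed in
    # "Strengths and Weaknesses of Quantum Computing".  This is plain
    # arithmetic, not a simulation of a quantum computer.
    from math import isqrt

    for n in (10**6, 10**9, 10**12):
        print(f"N = {n:>15,}   classical ~ {n:>15,}   quantum ~ {isqrt(n):,}")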

The bottom line is that different computer architectures will be useful
for very different problems. Multiply-and-accumulate or neural-net
based computers may be useful for "fuzzy" intelligence.
Cellular-automata-like machines (e.g., Blue Gene) may be best for
simulations. Quantum or DNA computers may be best for factoring or for
solving traveling-salesman problems.

It requires discipline not to use buzzwords like "quantum computers"
in areas where they may be inappropriate.

>The question is whether the enormous power of quantum computers will allow
>them to learn human levels of logic, reason and innovation. Could they, for
>example, feel love, hate and compassion? In short, will a computer's brain
>have what in humans we call a mind?

Love, hate, and compassion are hard-wired into humans at the genetic,
neurological, and biochemical levels. I believe the leading candidate
for the "love" drug is currently oxytocin, though other factors are
probably involved. If we "program in" or "select for" traits that
create human- or "mind"-like characteristics in our computers, cyborgs,
or robots, the answer is *yes*, they will have minds.

>Ray Kurzweil, another eminent American technologist, is more gloomy. He
>predicts that human identity will be called into question by the massive
>computers of the future.

Human "minds" can evolve and adapt. Human bodies will become irrelevant
in the long term. Because human bodies cannot avoid local hazards
(earthquakes, floods, hurricanes, etc.) bodies will eventually be
irreparably damaged. The interim period, which could well last
thousands of years depending on how "attached" we remain to our
bodies, will be one of gradual uploading as human minds expand
within computer hardware (perhaps not unlike how the developing
brain of a fetus or young baby expands within its growing head).

>By 2020, a mere two decades from now, technology will be producing
>computers faster and more powerful than the human brain.

As pointed out above, we will get this by 2005-2010. There is no
guarantee, however, that we will be able to program or teach these
machines to have human-like intelligence by 2020 or even 2030.

>But computing power is only part of the story. It is the networks that
>really give computers their potent and omnipotent future.

This is critical -- in the brain it is *not* the raw processing power
that creates much of the capacity; it is the communications bandwidth
between all of the computational nodes (neurons). It is a very open
question when we will be able to build computing logic elements that
have the number of raw inputs and outputs that the brain has. Matching
the communications throughput of the brain is probably a much more
significant barrier for some aspects of intelligence than matching
its processing power.
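
To get a hedged sense of the scale involved: the figures below are the
commonly cited textbook ballparks (~10^11 neurons, on the order of 10^4
synaptic inputs each, average signalling rates of a few Hz to tens of
Hz) -- they are my assumptions, not numbers from the article:

    # Very rough scale of the brain's "interconnect", using commonly
    # cited ballpark figures (assumptions, not data from this post).
    neurons = 1e11            # ~10^11 neurons
    inputs_per_neuron = 1e4   # ~10^3-10^4 synaptic inputs; high end used
    avg_rate_hz = 10          # rough average signalling rate per synapse

    synapses = neurons * inputs_per_neuron
    events_per_sec = synapses * avg_rate_hz

    print(f"synapses ('wires'):  ~{synapses:.0e}")
    print(f"signalling events/s: ~{events_per_sec:.0e}")
    print(f"fan-in per node:     ~{inputs_per_neuron:.0e} "
          "(vs. a handful of inputs on a typical logic gate)")

Even at one bit per event, that is on the order of 10^16 bits/second
moving over ~10^15 point-to-point links -- and that interconnect, not
the raw arithmetic, is the barrier being described here.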

>About 2050 or so, says Mr Kurzweil, the computers, by then capable of
>reasoning for themselves, could decide we humans are too slow, too
>ignorant, too petty and too argumentative to be tolerated.

Poppycock again. If you link humans to computers using the technologies
discussed (neural implants and high-bandwidth radio or fiber
connections), then you can't draw such a simple dividing line between
artificial minds and human minds.

>Why would a computer wish to associate with us?

Why would a human want a frail human body if it could do away with it?

> Just as we have allowed the information age to create increasingly
> divided societies of haves and have-nots, so will it be
>we humans who will create a future of cyborgs and mega-computers.

Self-replicating systems based on biotechnology will solve much of the
"have-not" problem. Full-blown molecular nanotechnology is not required.

>Mr Kurzweil quotes Daniel Hillis, a noted computer engineer: "I'm as fond
>of my body as anyone else, but if I can be 200 with a body of silicon, I'll
>take it."

If the problem of aging is resolved through the clever application
first of biotechnology and then of nanotechnology, the average
lifespan, as determined by the present-day U.S. accident rate, would
be ~2000 years. Who would want to live 2000 years without being able
to grow and evolve beyond the limitations of a standard-issue human
brain?
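
The ~2000-year figure is just actuarial arithmetic: if aging (and
disease) is eliminated and accidents remain the only cause of death at
a roughly constant annual rate, expected lifespan is the reciprocal of
that rate. The rate below (~50 accidental deaths per 100,000 people per
year) is a rough U.S. ballpark I am supplying, not a number from the
article:

    # Worked version of the ~2000-year estimate.  With a constant
    # annual accident-death probability p, remaining lifespan is
    # geometrically distributed with mean 1/p.  The rate used here is
    # a rough U.S. ballpark (an assumption).
    p_accident_per_year = 50 / 100_000   # ~50 accidental deaths/100k/yr

    expected_years = 1 / p_accident_per_year
    print(f"expected lifespan: ~{expected_years:.0f} years")   # ~2000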

>Then, are you a human or a computer, and who's side are you on?

It's always easy to cast this as "us" vs. "them". Instead, we merge:
those humans who are not too afraid to evolve will do so, perhaps in
the end choosing to leave the planet to those who choose not to.

Robert Bradbury


