From: J. R. Molloy (jr@shasta.com)
Date: Wed Sep 05 2001 - 10:39:12 MDT
From: "Zero Powers" <zero_powers@hotmail.com>
> it seems to me that what he is proposing is some sort of symbiosis
> between the biological and artificial intelligences.
<<As far as Hawking's second recommendation is concerned, namely direct
connection between the brain and computers, I agree that this is reasonable,
desirable, and inevitable. It's been my recommendation for years.
I describe a number of scenarios to accomplish this in my most recent book,
The Age of Spiritual Machines, and in the book précis "The Singularity is
Near">>
--Raymond Kurzweil
[Refer to message Sent: Tuesday, September 04, 2001 3:26 PM
Subject: RE: Hawking on AI dominance]
> At that point we pathetic meat-puppets had
> better cross over and abandon the bio-brain, or get seriously left behind.
<<I recommend establishing the connection with noninvasive nanobots that
communicate wirelessly with our neurons. As I discuss in the précis, the
feasibility of communication between the electronic world and that of
biological neurons has already been demonstrated. There are a number of
advantages to extending human intelligence through the nanobot approach.
They can be introduced noninvasively (i.e., without surgery). The
connections will not be limited to one or a small number of positions in the
brain. Rather, the nanobots can communicate with neurons (and with each
other) in a highly distributed manner. They would be programmable, would all
be on a wireless local area network, and would be on the web.
<<They would provide many new capabilities, such as full-immersion virtual
reality involving all the senses. Most importantly, they would provide many
trillions of new interneuronal connections as well as intimate links to
nonbiological forms of cognition. Ultimately, our minds won't need to stay
so small, limited as they are today to a mere hundred trillion connections
(extremely slow ones at that).
<<However, even this will only keep pace with the ongoing exponential growth
of AI for a couple of additional decades (to around the mid-twenty-first
century). As Hans Moravec has pointed out, a hybrid biological-nonbiological
brain will ultimately be 99.999...% nonbiological, so the biological portion
becomes pretty trivial.
<<We should keep in mind, though, that all of this exponentially advancing
intelligence is derivative of biological human intelligence, derived
ultimately from the thinking reflected in our technology designs, as well as
the design of our own thinking. So it's the human-technology civilization
taking the next step in evolution. I don't agree with Hawking that "strong
AI" is a fate to be avoided. I do believe that we have the ability to shape
this destiny to reflect our human values, if only we could achieve a
consensus on what those are.>>
--Raymond Kurzweil
[Refer to message Sent: Tuesday, September 04, 2001 3:26 PM
Subject: RE: Hawking on AI dominance]
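
Kurzweil's "99.999...%" figure falls out of simple arithmetic. A toy
calculation, in Python (the starting point of parity with the brain's
roughly 10^14 connections and the annual doubling rate are illustrative
assumptions, not figures from his message):

    # Biological share of a hybrid brain when the nonbiological part grows
    # exponentially. Assumes parity at year zero and a doubling every year;
    # both numbers are illustrative, not Kurzweil's.
    BIO_CONNECTIONS = 1e14  # rough count of interneuronal connections

    for years in (0, 10, 20, 30, 40, 50):
        nonbio = BIO_CONNECTIONS * 2 ** years
        bio_share = BIO_CONNECTIONS / (BIO_CONNECTIONS + nonbio)
        print(f"after {years:2d} years: biological share = {bio_share:.3e}")

By year 50 under these assumptions the biological share is below one part in
10^15, which is the sense in which the biological portion "becomes pretty
trivial."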
----------------------------------
From: "James Rogers" <jamesr@best.com>
> I don't suppose you remember the Tandem NonStop systems and similar devices
> from well over a decade ago. Extreme fault tolerance. On some of those
> systems, essentially every transient action was in persistent memory. Of
> course, the occasional crash is usually much, much cheaper than buying an
> unstoppable computer for most businesses/people.
Fault tolerance may be an important component of "strong AI." In addition,
fault-tolerant systems make it possible for AI to sidestep the pitfalls
associated with the confabulation that plagues human cognition (too often
resulting in cockamamie stories about UFO abductions, synchronicity,
spiritualism, etc.). Unfortunately, fault tolerance cannot restore previous
clarity to machines infected with software viruses. The human correlate to
such viruses is "values," infectious memes with no known cure.
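
The property James describes (essentially every transient action in
persistent memory) can be approximated in software with a write-ahead log:
persist each action durably before applying it, and rebuild state by
replaying the log after a crash. A minimal sketch in Python (the file name
and record format are invented for illustration, not taken from any Tandem
interface):

    import json, os

    # Write-ahead-log sketch: each action is fsync'ed to durable storage
    # BEFORE it is applied in memory, so a crash at any point can be
    # recovered by replaying the log.
    LOG = "actions.log"

    def apply_action(state, action):
        state[action["key"]] = action["value"]

    def do_action(state, action):
        with open(LOG, "a") as f:          # 1. persist first
            f.write(json.dumps(action) + "\n")
            f.flush()
            os.fsync(f.fileno())
        apply_action(state, action)        # 2. then apply in memory

    def recover():
        state = {}
        if os.path.exists(LOG):
            with open(LOG) as f:
                for line in f:             # replay every persisted action
                    apply_action(state, json.loads(line))
        return state                       # (torn-write handling omitted)

    state = recover()
    do_action(state, {"key": "balance", "value": 42})
    print(state)

The fsync before the in-memory update is the crux: after a crash, every
action either replays from the log or never happened at all.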
--J. R.
Useless hypotheses, etc.:
consciousness, phlogiston, philosophy, vitalism, mind, free will, qualia,
analog computing, cultural relativism, GAC, Cyc, Eliza, cryonics, individual
uniqueness, ego