From: J. R. Molloy (jr@shasta.com)
Date: Wed Jul 25 2001 - 17:21:44 MDT
From: "Sean Kenny" <seankenny@blueyonder.co.uk>
> I'm struggling through Deutsch's book "The Fabric of Reality" at the moment,
> about half way through after several months (and it's only a short book!),
> but I was talking to a friend the other day about some of the ideas in it,
> and he said he struggled with being able to accept that consciousness would
> be able to survive millions of different versions of itself being peeled off
> in different directions every second. I can't say I'd thought of it before
> but I expect some version of myself had a decent answer for him.
Yes, let's consider that you've made the right decision in struggling through
Deutsch's book, or at least the version of the book that found its way into
the version of your hands at the time of that version of you. The version of
your friend with whom you talked on the version of that day (in the version of
your memory that you recall) may have progressed to more accurate versions of
modeled reality if the version of you and he had thought in other categories
than "consciousness," which, like phlogiston, has been debunked in this
consensus version of existence. ©¿©¬
Please excuse that version of inquiry, but Many Worlds do seem to invite it,
don't they?
Does Deutsch mention "consciousness" in "The Fabric of Reality"?
Here's an article which may help to explain why I don't believe in
"consciousness."
(This was posted to the list by Max More several months ago.)
Note this quotation in particular:
<<Dr. Terrence Sejnowski, director of the Computational Neurobiology
Laboratory at the Salk Institute, agrees that the properties of the brain
can likely be duplicated in artificial devices, although he regards
questions of consciousness with a decidedly unphilosophical bent of mind.
[Something else I don't believe in. --J. R. ]
"My own suspicion is that words like 'consciousness' and 'qualia' will go
the way of words like 'phlogistem' and 'vitalism.'">>
What Neuroscientists and Computer Scientists Can Learn from Each Other
By Karl A. Thiel
http://www.doubletwist.com/news/columns/article.jhtml?section=weekly01&name=weekly0130
Metaphors are powerful things. Ever since the first digital computers were
created in the 1940s, we have had a seemingly irresistible urge to
anthropomorphize them, likening computers to artificial brains,
anticipating and sometimes even fearing the powers of intention and
cognition they might someday embody.
And why not? Computers, like us, seem to understand language of a sort.
They take input from various sources, process it and offer
conclusions--much like we seem to do. With the right instruction, they can
be trained to do new things, just as we can. They can do all these things
and more, and do it much more quickly and powerfully than we can, making
computer intelligence a humbling prospect. As computer pioneer Marvin
Minsky once quipped, we will "be lucky if they are willing to keep us
around as household pets."
Parallels between computers and brains have not simply been offered up as
a piece of poetic comparison. It is meant in some quarters quite
literally: The brain is a computer, and the mind is its program. Many
efforts to create artificial intelligence have focused on determining how
to replicate in computers what some believe happens in our brains: We
receive discrete bits of information about the world from our senses,
decode them in our brain, and assemble them into coherent pictures of
reality. By apprehending the world and applying an inherent sense of
logic, reasoning ensues, and with it an ability to solve problems--even
unfamiliar ones in novel contexts.
It's a terrible metaphor, says Dr. Gerald Edelman, and a damaging one.
Edelman, who won the 1972 Nobel Prize in medicine for his discoveries
about the structure of antibodies, has spent the last two decades studying
the brain. He says the notion that the brain is a kind of supercomputer
hasn't just misled computer scientists pursuing artificial intelligence;
it has distracted us from a better understanding of how the brain really
works.
And yet, computers really do seem to be getting smarter. Even as advances
in neuroscience shed light on the "black box" of the brain, new approaches
to computer programming are achieving results that seem more and more like
intelligence. In fact, the two disciplines have a great deal to teach each
other. Some of the most fascinating parallels may be found right at the
Neuroscience Institute in La Jolla, CA, which Edelman founded and directs.
The Theory of Neuronal Group Selection
Edelman has set out to "complete the project of Darwin"--that is, to show
how natural selection could lead to the emergence of consciousness and the
mind of humankind. It was a notion Darwin struggled with in his later
years: he believed his theory could explain the human mind as well as the
origin of species, even though his colleague Alfred Russel Wallace rejected
the notion.
"You just can't get around Darwin," says Edelman. "Darwin's theory is the
central theory of biology. Anything that says it's going to emulate
biology by ignoring that and doing it instructively instead of
selectionally is barking up the wrong tree."
Most of us are used to thinking of natural selection as a slow process
that occurs over generations. But Edelman's own Nobel Prize-winning work
on the human immune system proves that the creation of diversity and the
process of natural selection can lead to powerful change on a scale of
seconds rather than decades. So why not the brain?
Computer scientists daunted by the idea of the brain as a "Darwin machine"
will be relieved to learn that the tenets of Edelman's "Theory of Neuronal
Group Selection" (TNGS) are few in number. In fact, he says, the theater
in our mind is a result of just three essential processes: developmental
selection, experiential selection, and reentry.
The broad principles may be few but there's a lot of wiring required.
There are only two major cell types in the brain--neurons and glial cells,
with the signaling activity coming from neurons, of which there are over
200 subtypes. In the course of development, the brain forms an absolutely
astounding number of neurons--over 100 billion; about 10 billion in the
cerebral cortex alone, which reach out to form over a million billion
connections with each other. This circuitry represents an absolutely
unfathomable number of potential circuits... a number with millions of
zeroes after it.
While the development of the gross morphology of the brain is dictated by
gene expression, the complexity of interconnections among neurons goes
beyond the genetic code. A dense web of neural connections is formed
stochastically--not at random, perhaps, but certainly unpredictably. No
two people--not even identical twins--have identical brain circuitry.
For natural selection to take place, it must occur in an environment of
sufficient variety that alternatives can be chosen--which the complexity
of our brains certainly provides. The formation of myriad connections
during development is important in this regard, because during the course
of your lifetime, you will define new neural pathways, using experience to
strengthen some routes and weaken others--even as neurons die and others
are formed.
Edelman first laid out his TNGS in the 1987 book Neural Darwinism--at a
time when some of the scientific support for the ideas within was
equivocal. Until relatively recently, for example, it was believed that no
new neurons could form during one's lifetime, but neurogenesis in the adult
human brain was proven by a team of researchers in a 1998 study published
in Nature Medicine. The notion that existing pathways can be strengthened
and weakened as a result of experience, and that this is related to
learning, was a subject of the 2000 Nobel Prize in medicine, awarded to
Dr. Eric Kandel of Columbia University (New York, NY).
Do You, Uh, Google?
To understand how this process of experiential selection works, consider
the search engine Google.com. Before Google, Internet search engines
generally worked under one of two methods. The first involved "crawler"
programs that search out keywords across the worldwide web, ranking
pages--any pages--in order of the best keyword match. The results are
familiar to many of us: enter a query about "sexual reproduction," and
you'll likely get a pornography site. Enter a query about "recombinant
DNA," and you still might get a pornography site.
With the Tower of Babel that the Internet has become, others thought a
little human oversight was necessary, leading to the second type of search
engine. Yahoo is perhaps the most prominent example of this "index"-based
approach, in which teams of real people categorize web pages--meaning that
categories tend to truly contain what you'd expect. The downside is that
the web grows far faster than any team of people can index pages, and such
a system is necessarily incomplete. But the founders of Google looked at
the Internet a little differently. Individual web pages abound, connected
to their respective servers but also to each other by hyperlinks. It is
indeed tempting to view them as neurons, linked in some broadly
characterizable ways but interconnected in a fashion so numerous and
complex (and changing so rapidly) that it defies any comprehensive human
attempt at mapping.
But all pathways are not equal. The Internet, in a more-than-metaphorical
sense, evolves over time. A new page appears and, if it is authoritative
and useful, gets visited by people. Other pages link to it. And each link
is a sort of vote in favor of that page, an endorsement of its relevance
and utility. But this is no democracy, and all votes are not equal. Votes
from pages that are themselves ranked high in importance are given more
weight than others. In a very real sense, some pathways through the
Internet are more heavily traveled than others, and Google looks for
these. It doesn't define the pathways, but it uses them in ranking pages.
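In code, the voting scheme looks roughly like this: a minimal Python
sketch of the link-voting idea, not Google's actual algorithm (the toy
graph, damping factor, and names are assumptions for the example):

    def rank_pages(links, iterations=50, damping=0.85):
        # links maps each page to the pages it links to.
        pages = set(links) | {p for ts in links.values() for p in ts}
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new = {p: (1.0 - damping) / len(pages) for p in pages}
            for page, targets in links.items():
                for t in targets:
                    # a link is a vote, weighted by the voter's own rank
                    new[t] += damping * rank[page] / len(targets)
            rank = new
        return rank

    web = {"a": ["c"], "b": ["c"], "c": ["a"], "d": ["c"]}
    print(rank_pages(web))  # "c" collects the most, and best, votes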
In the brain, pathways are constantly changing, and it is these changes in
part that allow us to learn and adapt and have conscious experience,
Edelman says. Observations of infants have led to some remarkable theories
about brain development in this regard. Infants almost always try to grasp
at an object held near them, but newborns are seldom successful at first.
In fact, they flail more or less at random, not having inherited a
"program" that tells them how to coordinate the information coming in
their eyes with the movements of their hands. Not until they achieve
success--at first, essentially by luck--do they learn to distinguish
successful behaviors from ineffective strategies. This example and a
million others that occur throughout our lifetime are how experiential
selection takes place--some neural pathways strengthened while others are
weakened, or pathways created while others are destroyed.
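As a toy illustration of the grasping example (my own sketch, not
Edelman's model; the action names and update factors are invented),
here is a Python loop in which initially random behavior is shaped
purely by strengthening what succeeds and weakening what fails:

    import random

    weights = {"reach_left": 1.0, "reach_center": 1.0, "reach_right": 1.0}

    def try_to_grasp(action):
        # Stand-in for the world: only one action actually works.
        return action == "reach_center"

    for _ in range(200):
        actions = list(weights)
        action = random.choices(actions, [weights[a] for a in actions])[0]
        if try_to_grasp(action):
            weights[action] *= 1.10   # strengthen the successful pathway
        else:
            weights[action] *= 0.95   # weaken the ineffective one

    print(weights)  # the successful pathway now dominates selection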
The final wiring requirement for consciousness is reentry. To understand
reentry, Edelman and his Neuroscience Institute colleague Giulio Tononi
suggest you imagine a string quartet beginning to play without sheet
music. The players begin improvising their own tunes, oblivious of what
the others are playing. But quickly their awareness of what the other
players are doing causes them to act in concert, so to speak. This metaphor
doesn't indicate anything of the complexity of reentrant connections in
the brain, however, and Edelman and Tononi go on to suggest that you
imagine thousands of strings connecting the arms of each player to each of
the others. This way, other players are immediately aware of their
fellows' movements, drawing independent action almost instantly into
concert. This is reentry, on a tiny scale.
Don't confuse this with feedback, which various computer systems could be
said to demonstrate. Feedback travels along a single loop between two
points; reentry happens on a massive scale, with neuronal groups connected
tightly to neighboring neuronal groups, slightly less tightly to more
distant regions of the brain, but essentially meaning that when one part
of the brain is active, the rest is aware. "Reentry is the binding
principle of the brain--it's the way in which dynamically you have higher
order selection that correlates map to map," says Edelman.
How Does the Light Go On?
But no matter how complex and adaptive our brain's wiring is, how does it
result in the subjective experience of consciousness? Here at the troubled
crossroads of science and philosophy lies the nut of the problem.
Consciousness, as any good solipsist understands, is something you know
only about yourself. You cannot prove that you are not alone in a world of
zombies that behave the same way you do, but without the flavor of
consciousness--able to identify and react to stimuli with accurate
perception but not knowing what it feels like the way you do.
Edelman doesn't deny the problem, but he does believe that we can study
consciousness nonetheless. And one way to do it is by simulating
it--perhaps one day even creating it--in machines.
"[T]he only way we may be able to integrate our knowledge of the brain
effectively, given all its levels, is by synthesizing artifacts," Edelman
wrote in his 1992 book Bright Air, Brilliant Fire. The ultimate version of
such an artifact would be a conscious machine, or what some might call
true artificial intelligence. Eight years later, he still believes that it
may eventually be possible to build such an artifact. "As we go along, I
have no doubt we'll be able to simulate more and more complex dynamics of
the brain... and we will get to that point where we've struck something that
will integrate into primary consciousness," says Edelman. "But the
language part--oh, boy. That's going to be hard."
Even if such a feat were achieved, however, it would still be hard to
silence critics. "If you asked me whether I think dogs are conscious, I'd
say yes, but I can't prove it," says Edelman. "All I can tell you is that
[dogs have] all of the structures that we know are essential for
consciousness. They have a behavioral repertoire that is suggestive of that
fact." The same will go for a machine with primary consciousness, and
even a machine with a higher-order consciousness and an advanced
linguistic ability will be unable to prove its consciousness. Instead,
we'll likely be reduced to comparing the electrical activity of its
artificial brain to a human brain and administering some sort of Turing
test--basically seeing if we can distinguish it from a human by
questioning it. "The philosophical problem is not going to go away, but I
wouldn't elevate it to the highest metaphysical proportions," says
Edelman.
Dr. Terrence Sejnowski, director of the Computational Neurobiology
Laboratory at the Salk Institute, agrees that the properties of the brain
can likely be duplicated in artificial devices, although he regards
questions of consciousness with a decidedly unphilosophical bent of mind.
"My own suspicion is that words like 'consciousness' and 'qualia' will go
the way of words like 'phlogiston' and 'vitalism.'"
"If you go back a hundred years," he explains, "one of the biggest
scientific questions was 'what is life?' And one of the most prominent
theories had to do with vitalism--some substance, some thing that is
transmitted from cell to cell, animal to animal, that is the essence of
life. Well, you don't hear anybody talking about vitalism anymore. We've
come far enough to see all the mechanics--we've seen how DNA works, we've
seen all the pieces of the cell, and we don't have need for a hypothesis
like vitalism." So it will go, Sejnowski suspects, with consciousness.
(Phlogiston, incidentally, refers to a theoretical substance that people
once sought in combustible material, thinking it made up the "substance"
of fire.)
Not Proof, But...
A less decisive proof may come from a future version of a robot, or
"brain-based device" called NOMAD (for Neurally Organized Mobile Adaptive
Device) at the Neurosciences Institute. NOMAD looks a little like the
computer-controlled robots used in artificial intelligence experiments of
years past--a device on wheels, rolling around in a world of simple
shapes, looking out through a small camera. There's a big difference,
however.
"When you look at it, it's pretty stupid," acknowledges Edelman. "All it's
doing is going around a room picking up blocks. But when you understand
what it's really doing, it's a real show-stopper. If you brought a
hardcore AI programmer, he'd say 'I could write a program in 500 lines
that would do better than that.' The answer is yes--you're doing it. This
thing does it by itself."
Take a small detour here to consider one of Edelman's theories of
consciousness. How does our memory work? It's tempting to fall back on the
computer metaphor--computers use a certain part of their brain to record
bits of their experience, and so do we... right? Not so. It's an odd concept,
but Edelman says we don't really record memory at all. Memory is a system
property.
This is where the intersection of theory and experiment becomes a bit
hairy. After all, the brain is still mysterious in many ways. While
Sejnowski notes that we have fairly detailed knowledge of the individual
components of the brain, "we're still at the very beginning stage of
putting them back together." Part of Sejnowski's own work involves what he
jokingly refers to as the "Humpty Dumpty Project"--building complex
computer models of individual neural cell types and linking them together
into detailed computer simulations. His team has found that simulations
accurately model the overall electrical patterns seen in the brain; the
ultimate goal will be to have a detailed enough simulation to work in
reverse--to model an overall brain pattern associated with some experience
and trace it back to its individual neural origins.
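As a flavor of what such models are built from, here is a toy leaky
integrate-and-fire neuron in Python (illustrative only; the models
Sejnowski describes add detailed channel kinetics and cell morphology):

    # Membrane voltage decays toward rest, is driven by input current,
    # and fires (then resets) when it crosses threshold.
    v, v_rest, v_thresh, v_reset = -65.0, -65.0, -50.0, -65.0
    tau, dt, drive = 10.0, 0.1, 20.0   # ms, ms, mV of steady input
    spike_times = []
    for step in range(1000):           # simulate 100 ms
        v += dt * ((v_rest - v) + drive) / tau
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset
    print(f"{len(spike_times)} spikes in 100 ms")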
We know which "part" of the brain controls various aspects of
consciousness or function by using imaging devices that show electrical
activity over time or by analyzing in detail the activity of a small
number of neurons. That a particular part of the brain is active during a
certain function, however--or that the loss of that area knocks out a
certain function--doesn't mean you've located the function. You can't find
the brain's "memory chip" that way. Although certain parts of the brain
are heavily associated with memory, they may be hubs of activity that
nonetheless rely on other parts--and that, Edelman says, is indeed the
case. When it comes to consciousness, asking a question like "where does
memory reside?" is kind of like asking "where is the Internet?" You could
pinpoint some hubs that are more important than others, but it would be a
mistake to address these as "the Internet." Likewise for memory, vision,
proprioception, and just about any other aspect of consciousness. It's
difficult to grasp--how could Marcel Proust nibble a petite madeleine cake
and have 10 volumes of Remembrance of Things Past come flooding in,
without some sort of neural recording device?
Memory, in Edelman's view, occurs when, by will or new experience, through
interaction with the environment or within your brain or both, you explore
previously defined neural pathways. Because an experience was frightening,
or pleasant, or odd, or moving, your brain's circuitry adapted to it and,
across reentrant pathways, linked it to other aspects of your
consciousness, some of which may have seemed unimportant at the time--a
smell, perhaps, or the taste of a petite madeleine. (Note that, almost by
definition, it is difficult to remember something boring without a
conscious effort. Boringness does not trigger a lot of Darwinistic
response--you're more likely to remember the experience of being bored,
which is unpleasant, rather than boring content, which is just plain
boring.)
NOMAD may not offer proof of this notion, but it offers an amazing piece
of support. The device has no memory, not in the computer sense at
least... but it remembers. It does not record its memories in a computer chip
or in any other fashion; it simply undergoes a reshaping of its simulated
neural pathways in response to the world it encounters--and memory
results. "NOMAD has a memory," says Edelman. "If you watch NOMAD before
and after it has learned something, you yourself can see in its behavior a
reflection of memory."
Meanwhile, in the Computer World
The application of Darwinistic forces to abstract problem-solving has also
found a foothold among computer scientists. At Natural Selection in La
Jolla, programmers have used "evolutionary computation" as a schema for
letting computers approach unique problems in unique contexts. In broad
terms, says president Dr. Lawrence Fogel, this means computers are left to
approach problems in a combinatorial fashion, making a "population" of
strategies and culling out those that are unfit, duplicating and then
"mutating" successful strategies as a means of refinement. The programs
can either start out with no information beyond the overall constraints of
the problem, or be given a starting population representing the
programmer's acquired wisdom.
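In skeletal form, the loop just described might look like the
following Python sketch (a bare-bones illustration; the toy fitness
function and parameters are mine, not Natural Selection's software):

    import random

    def fitness(x):
        # Stand-in problem: find the x that maximizes this function.
        return -(x - 3.7) ** 2

    population = [random.uniform(-10, 10) for _ in range(20)]
    for generation in range(100):
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]    # cull the unfit half
        # duplicate the survivors and mutate the copies as refinement
        offspring = [s + random.gauss(0, 0.5) for s in survivors]
        population = survivors + offspring

    print(max(population, key=fitness))  # converges near 3.7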
To Fogel, this is artificial intelligence--"that is, simply, the ability
to solve new problems in new ways. In fact, I would go further--I don't
call it artificial if a computer can do it; I call it intelligence. The
machine may not be aware of it, it may not be conscious, but it's
intelligence."
Fogel's interest lies in problem solving, not creating consciousness. And
whether or not intelligence requires consciousness, of course, is a
semantic argument. (John McCarthy, who coined the term "artificial
intelligence," is said to have claimed that even a machine as simple as a
thermostat has beliefs. A thermostat believes three things: 'it's too hot
in here,' 'it's too cold in here,' and 'it's just right in here.' Hmmmm.
Most of us, however, agree that 'belief' requires consciousness and take
it as a matter of common sense that things like thermostats don't have
it.)
Fogel, who has met with Edelman and admires his theories, has used
evolutionary computing to tackle some interesting problems. For Agouron
Pharmaceuticals (La Jolla, CA, now part of Pfizer), his company used a
natural selection scheme to find drug candidates to fit the HIV protease.
The result, ultimately, was the successful protease inhibitor Viracept.
(Fogel hastens to add, however, that it wasn't Natural Selection's program
that led directly to the compound. "In linear programming, you get one
answer," he notes. "In evolutionary computation, you don't get one answer;
you get a population of answers.")
His son, chief scientist Dr. David Fogel, gained considerable attention
for the company when he wrote a program that taught itself how to play
checkers. The computer was given only the rules--no strategy or
instruction. A population of players was created and the successful
survived while the weak were winnowed out. The result was that sloppy,
random play became good enough to beat amateur and even some advanced
players.
Sejnowski agrees that biological theories have informed computer
scientists, pointing to the "genetic algorithms" created by John Holland,
now professor at the University of Michigan. Holland (who got the world's
first Ph.D. in computer science) pioneered the idea of programs as "genes"
which have to survive in a competitive environment, the fittest surviving
and mutating into better approaches. These were the precursors of the
evolutionary computation of today.
But the Darwin metaphor for the brain is limited, he adds. While Sejnowski
doesn't deny the scientific support for synaptic plasticity and
neurogenesis, he thinks the parallel to Darwinism is inexact. "If you want
to be strict--if you don't just want it to be a literary metaphor--there
needs to be a process of duplication. You need to take something that is
successful, make many copies, and mutate it. There's nothing that
corresponds to that in the brain."
Indeed, Edelman's theory makes no mention of the brain reproducing
identical pathways that have met with success; rather, enough diversity is
generated during development, and added by the growth of new neurons and
synapses, that the process of natural selection can proceed without
literal duplication.
Brave New Machines?
What is clear is that neuroscientists and computer scientists have greatly
benefited from one another. "Neuroscience has already had a big impact on
computer science," says Sejnowksi. Not least of all is the fact that the
brain is the original model for a "thinking machine," however imperfectly
digital computers represent it. But it goes beyond that. "I think it's
pretty clear that massive parallelism has won in the supercomputer
business," he says. "The general principle you take away looking at the
brain is that lots of small processors--even if they're individually not
very powerful--can solve enormously complex problems very efficiently."
Another example might be found in Hewlett-Packard's Teramac, a computer
that operates despite having over 220,000 flaws in its CPU. Dr. Phil
Kuekes, who led the team that designed the Teramac, explains that the goal
was to prepare for molecular scale computers that must tolerate a
considerable number of manufacturing errors--something standard
microprocessors cannot do.
"There are good biological analogs," Kuekes says. His team "invested a lot
in hardware," putting in switches at every transistor that can reroute
current in case of a flaw. "If you put glasses on a kitten that turn
everything upside down, the brain wires appropriately," he observes. "Based on
external circumstances, some wiring gets done later on. That's what we
did--based on external circumstances, we may find defects. Then some
wiring gets done later on." This is analogous to what Edelman calls
"degeneracy"--the ability of the brain to do the same thing in more than
one way.
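A toy Python sketch of that configure-around-defects idea (my
illustration; the Teramac's real crossbar architecture is far richer,
and the names here are invented):

    import random

    ROUTES_PER_SIGNAL = 4
    signals = ["add0", "add1", "carry", "out"]

    # Testing reveals which physical routes are defective
    # (simulated here with a 25% defect rate).
    defective = {(s, r) for s in signals
                 for r in range(ROUTES_PER_SIGNAL)
                 if random.random() < 0.25}

    # "Wiring gets done later on": pick a working route per signal.
    routing = {}
    for s in signals:
        working = [r for r in range(ROUTES_PER_SIGNAL)
                   if (s, r) not in defective]
        if not working:
            raise RuntimeError(f"no route left for {s}; add redundancy")
        routing[s] = working[0]

    print(routing)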
If Edelman is correct in his broad theory of neural Darwinism, neuronal
group selection, and the origin of consciousness, it will likely be many
years before skeptics are convinced--if they ever are. Perhaps it will
require a conscious machine. But in the meantime, many isolated aspects of
modern neuroscience are finding their way into computer science. From the
pathway selection of Google to the redundant structure of HP's Teramac,
from the "evolutionary compuation" of Natural Selection to emerging
"neural networks," from the massive parallelism powering cutting edge
supercomputers to the distributed computing making networks better and
more reliable, computer systems are looking a lot more like the brain
these days. And of course, it works the other way: Some of these hardware
and software advances may well find their way back to a future version of
NOMAD or some similar brain-based device.
But in the end, is artificial intelligence--the kind that involves
consciousness--really what we want? Edelman believes brain-based devices
like NOMAD will not just teach us about the brain, they will prove
incredibly valuable in their own right. "We don't want to be too
snobbish," he says. "I personally think someone is going to make billions
of dollars
when we have brain-based devices that can go beyond what we're now doing.
Because the brain preceded logic--after all, Aristotle came later in human
history. The brain gave rise to culture and culture gave rise to
Aristotle. Prior to that, selectionism is what counted. If that's the
case, then it's perfectly obvious that brain-based devices are going to
supplement the computer in remarkable ways."
Edelman adds that the Neurosciences Institute was recently visited by a
researcher from IBM. "He's trying very hard to persuade his colleagues
that they need to wake up to this approach."
©¿©¬
Stay hungry,
--J. R.
Useless hypotheses, etc.:
consciousness, phlogiston, philosophy, vitalism, mind, free will, qualia,
analog computing, cultural relativism, GAC, Cyc, Eliza, and ego.
Everything that can happen has already happened, not just once,
but an infinite number of times, and will continue to do so forever.
(Everything that can happen = more than anyone can imagine.)
We won't move into a better future until we debunk religiosity, the most
regressive force now operating in society.