From: Hal Finney (hal@rain.org)
Date: Tue Nov 25 1997 - 11:14:21 MST
The current model for how the brain works is that it is a network of
interacting neurons. The neurons interact at points of (near) contact
called synapses. There, neurons are bombarded with various neurotransmitter
chemicals from nearby neurons. When a sufficient level of stimulation is
reached, the neuron cell membrane triggers a cascade of events leading to
a nerve impulse. This travels through the neuron and eventually triggers
it to release its own neurotransmitter chemicals at synapses where it
impinges on other neurons.
Each neuron works in this conceptually simple way. The complexity of the
brain arises from the fact that billions or trillions of neurons are
all interacting in a very complex network. But if we zoom in on any small
portion of it, we see that this simple, essentially mechanical activity is
all that is happening.
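To make this concrete, here is a rough sketch of the kind of threshold
unit I have in mind, written in Python. The names (Neuron, threshold,
receive, fire) are invented purely for illustration, and a real neuron is
of course far messier, but the point is that nothing more than this sort
of bookkeeping is being assumed:

    class Neuron:
        def __init__(self, threshold=1.0):
            self.threshold = threshold    # stimulation needed to trigger an impulse
            self.stimulation = 0.0        # accumulated neurotransmitter input
            self.outputs = []             # (downstream neuron, amount) pairs, one per synapse

        def receive(self, amount):
            # Neurotransmitter arriving at a synapse raises the stimulation level.
            self.stimulation += amount
            if self.stimulation >= self.threshold:
                self.fire()

        def fire(self):
            # The impulse travels through the cell and releases neurotransmitter
            # at each synapse where this neuron impinges on another.
            self.stimulation = 0.0
            for target, amount in self.outputs:
                target.receive(amount)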
Now it is true that this model is neither complete nor fully verified.
It could turn out that there are other important effects that are not
yet recognized. But most people would agree that this model is at least
logically possible. It might be wrong, but equally it might be right.
There would be no logical inconsistency if it turned out that brains
and neurons actually do work this way, that underlying the complexity
of brains, with all their sensations and qualia, are these simple neural
interactions.
I am not sure whether Brent believes this. It almost seems that he is
forced to say that this model must be wrong, that by pure introspection and
reasoning he can conclude that there has to be more to the brain than this.
The reason is that he does not accept the obvious conclusion of
the neuron-substitution thought experiment. Given that neurons work
as postulated above, it should be possible to replace a neuron with an
electro-mechanical device. It works identically to biological neurons at
the inputs and outputs, sensing and emitting neurotransmitter chemicals.
But inside it is a computer, which is designed to exactly mimic the
behavior of the biological neuron it replaces. When the inputs reach
certain thresholds, it waits for an appropriate delay to simulate the
travel of the neural impulse over the body of the biological neuron,
and then triggers its output mechanism to release neurotransmitters in
the same amount and timing that the biological neuron would have done.
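In programming terms, the substitute only has to present the same
interface to the rest of the network and reproduce the same input/output
behavior; what goes on inside the box does not matter. A hypothetical
sketch, continuing the toy model above:

    class ElectronicNeuron:
        # Same interface as the Neuron above, but inside it is a computer
        # simulating the biological cell it replaced.
        def __init__(self, threshold=1.0):
            self.threshold = threshold
            self.stimulation = 0.0
            self.outputs = []             # wired to the same synapses as before

        def receive(self, amount):
            # Chemical sensors detect the arriving neurotransmitter...
            self.stimulation += amount
            # ...the internal program decides whether the real cell would
            # have fired, and after the appropriate simulated delay...
            if self.stimulation >= self.threshold:
                self.stimulation = 0.0
                # ...the output mechanism releases the same amounts of
                # neurotransmitter, with the same timing, at the same synapses.
                for target, amount_out in self.outputs:
                    target.receive(amount_out)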
Substituting this electronic neuron for a biological one cannot change
the behavior of the neural net. If we imagine two identical brains, with
one of them having a biological neuron replaced by an electronic one,
they will both operate exactly the same. All the patterns of neural
impulses in the two will be identical. In particular, everything the
two brains say and do will be the same. It will not be the case that
one says that it can see normally, while the other says that it seems
to be blind. That would be a difference in speech patterns which would
require different patterns of nerve impulses in the brain. But no such
difference will occur because the substitute neuron behaves identically
to the real one as far as patterns of input and output go.
Even if more biological neurons are replaced, and even if we forgo
the need for chemical neurotransmitters at synapses which involve only
electronic neurons, there will still be no changes in the patterns of
activity. The two brains will remain identical in their nerve firing
patterns. Each time a neuron fires in the purely biological brain,
the corresponding neuron will fire at exactly that same instant in the
partially electronic one. So it will not be possible for the two brains
to report any differences in what they perceive.
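As a toy check of this claim (not a proof, just the same point restated
in code), we can wire up a small chain of the units above twice, once all
"biological" and once with the middle unit replaced, drive both with the
same stimulus, and compare what comes out the far end:

    class Recorder:
        # Stands in for a downstream neuron and just logs what it receives.
        def __init__(self):
            self.log = []
        def receive(self, amount):
            self.log.append(amount)

    def run_chain(middle_class):
        first, middle, end = Neuron(), middle_class(), Recorder()
        first.outputs = [(middle, 1.0)]
        middle.outputs = [(end, 1.0)]
        first.receive(1.0)                # identical stimulus in both cases
        return end.log

    # The recorded output is the same whichever kind of unit sits in the middle.
    assert run_chain(Neuron) == run_chain(ElectronicNeuron)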
Some philosophers suggest that what will happen is that the consciousness
in the partly electronic brain will get "out of sync" with the brain
firing patterns. It will notice that the qualia are gone, but somehow
it won't be able to report it. Apparently it will have lost control of
its mouth. This would presumably lead very quickly to a total disconnect
between the true consciousness, which is panicking at having lost control
of its body, and some other sort of simulated consciousness, which seems to
be going about its life quite normally. This would then imply that
consciousness apparently has virtually nothing to do with brain activity
since they can behave so independently. Most people will not go so far
into this form of dualism.
I think that Brent will suggest instead that the thought experiment won't
work. It would be impossible to substitute an electronic neuron for a
biological neuron without disrupting the brain's activity. We have to
assume that our understanding of the brain's mechanisms is incomplete, and
that with further understanding we will find that there are more complex
aspects which no electronic neuron can capture, since it is only modelling
a real biological neuron and is not actually the same as one.
This is a very strong assertion. It's not just a matter of saying
"we may be wrong"; it's saying "we must be wrong". It is saying that
purely by introspection and thinking about the nature of consciousness,
we can _know_ that there is more to the brain's function than the simple
model of interacting neurons I summarized above.
Personally, I don't find this very convincing, so perhaps my extrapolation
from Brent's postings is wrong. I am curious to know whether this does
in fact reflect his views.
Hal