Re: Qualia and the Galactic Loony Bin

From: hal@finney.org
Date: Tue Jun 22 1999 - 20:24:10 MDT


Several people have pointed to the lack of connectivity and causality in
the separated-brain experiment as reasons to believe it is not conscious.

I think this illustrates that there are two models of consciousness
even within the functionalist/computationalist paradigm: the pattern
model, and the causal model.

In the pattern model, consciousness is associated with a certain pattern
of events, such as the pattern of neural firings in our brains. Reproducing
that pattern will cause the consciousness to exist again.

In the causal model, consciousness is associated with a flow of
information, an active processing of data. It is not enough to have
neuron A fire and then neuron B fire; rather, neuron A must *cause*
neuron B to fire. If we eliminate the causality, as we do in our
separated-brain experiment where neurons are stimulated with
pre-calculated patterns, there is no consciousness.

I agree that the separated-brain experiment does not pose a problem
for the causal model. It is intended to challenge the pattern model.

However, I think we have seen other postings here which do challenge
the causal model. Emlyn's long message today described a scenario
(his "1'") where it is hard to say whether causality is occurring or
not. Eliezer posted a message in April which raised similar issues:
http://www.lucifer.com/exi-lists/extropians/0103.html.

The general idea here is to arrange for a brain to experience a
conscious state a second time: somehow reset it and then give it the
same inputs as before. If we neglect any non-deterministic behavior,
the brain will go through exactly the same sequence of states. Each
neuron's firing pattern will be exactly the same as it was during the
earlier run.
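
To make "the same sequence of states" concrete, here is a toy sketch
in Python (purely illustrative, not anything from Emlyn's or Eliezer's
posts): model the brain as a deterministic update rule, reset it, and
replay the inputs. The particular rule and numbers are arbitrary
stand-ins.

    # Toy deterministic "brain": any pure function of (state, input)
    # would make the same point.
    def step(state, inp):
        return tuple((s * 31 + inp + i) % 97
                     for i, s in enumerate(state))

    def run(initial_state, inputs):
        state, trace = initial_state, []
        for inp in inputs:
            state = step(state, inp)
            trace.append(state)
        return trace

    inputs = [3, 1, 4, 1, 5, 9]
    first_run = run((0, 0, 0), inputs)
    second_run = run((0, 0, 0), inputs)  # "reset" and replay the inputs
    assert first_run == second_run       # identical firing trace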

However, armed with information from the earlier run of the same
calculation, we can begin to blur the line between active calculation
and passive replay. Eliezer and Emlyn have both given examples of
this, very similar in flavor. You run the calculation again, but
instead of taking the output from one neuron and sending it to the
next, you substitute that neuron's recorded output from the previous
run - which is exactly the same!
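
As a sketch of the substitution (again a toy model of my own devising,
not taken from their posts), suppose we record every state on the
first run and then, on the second run, discard each freshly computed
output in favor of the recorded one. By determinism the two are
identical, so nothing downstream can tell replay from calculation:

    def step(state, inp):
        return tuple((s * 31 + inp + i) % 97
                     for i, s in enumerate(state))

    def active_run(initial_state, inputs):
        # Active calculation, recording every state as we go.
        state, recording = initial_state, []
        for inp in inputs:
            state = step(state, inp)
            recording.append(state)
        return recording

    def run_with_substitution(initial_state, inputs, recording):
        state, trace = initial_state, []
        for inp, recorded in zip(inputs, recording):
            computed = step(state, inp)
            assert computed == recorded  # recorded signal is identical
            state = recorded             # pass on the *recorded* output
            trace.append(state)
        return trace

    inputs = [3, 1, 4, 1, 5, 9]
    recording = active_run((0, 0, 0), inputs)
    # The "replay" is indistinguishable from the active calculation.
    assert run_with_substitution((0, 0, 0), inputs,
                                 recording) == recording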

Now what are we doing? Are we actively processing data, or passively
replaying the results from the previous run? How can it make a difference
when we are substituting an identical recorded signal for a calculated
signal? These thought experiments pose difficulties for the causal model.

Those who prefer the causal model and don't find the separated-brain
experiment to pose a challenge should take a look at Eliezer's and
Emlyn's puzzles and see whether they are as easy to resolve.

Hal


