From: Harvey Newstrom (newstrom@newstaffinc.com)
Date: Wed Jun 23 1999 - 14:27:00 MDT
Hal <hal@finney.org> wrote:
> This is where I think I have confused you, because I tried to describe
> another kind of experiment and you were still thinking in terms of the
> first one.
Yes, I think I got confused trying to follow too many examples at once. My
fault for using a primitive meat-brain. One of these days I plan to upgrade
it.
> The overlaying supposition I am trying to support is that none of
> these explanations where consciousness is based solely on information
> hold water. All have serious inconsistencies and all are implausible
> if you look at them closely enough. There must be something wrong with
> our fundamental reasoning on these problems.
Agreed. I think information alone is like a book. It contains data, but it
is not "functional". I have also recently realized that a functioning brain
with no history, memory, or experience is not functional either. I think we
need hardware, an operating system, and a database (to use computer
analogies). Maybe all three are required; one or two of the three is not
sufficient to simulate a consciousness.
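If I sketch the analogy as a toy program (everything here is invented purely
for illustration), the point is that the response fails when any one of the
three layers is missing:

    # Toy sketch in Python: hardware + operating system + database,
    # all three required before anything "functional" happens.
    class Mind:
        def __init__(self, hardware, operating_system, database):
            self.hardware = hardware      # the physical substrate
            self.os = operating_system    # the running processes
            self.db = database            # history, memory, experience

        def respond(self, stimulus):
            if not (self.hardware and self.os and self.db):
                return None               # a book alone, or a blank brain: no function
            context = self.db.get(stimulus, "novel")   # consult past experience
            return self.os(self.hardware, stimulus, context)

    mind = Mind("neurons",
                lambda hw, s, c: "%s processing %r in light of %r" % (hw, s, c),
                {"fire": "it burns"})
    print(mind.respond("fire"))           # works only with all three parts present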
> Does that help give you an overview of what I am trying to do here?
Yes. Thanks for your patience. A communications failure like this often
leads to unnecessary disagreements, when actually there is no disagreement
about the ideas, only a difference in perception as to what is being
discussed.
> You have to go through the steps of the thought experiment and the
> reasoning behind them to see how we come to the conclusion, you can't
> just deny it because it seems implausible. You need to reject the
> premise of the experiment, or one of the steps involved.
Actually, I do disagree with this methodology. Sometimes we can't detect
the exact moment where the failure occurs, but we can still predict that
there will be total failure by the end of the sequence. If I bleed to death,
you can't predict exactly which drop of blood loss will be fatal, but the
sum total effect is predictable. I had the same problem with your
experiments. I think the end points are pretty well defined and can be
debated. The intermediate examples are fuzzy and are difficult to debate.
I don't think this invalidates the final conclusions. I think it merely
shows that we do not have total and perfect knowledge of the entire system.
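A toy calculation makes the bleeding analogy concrete (the numbers are
invented): exactly which drop proves fatal varies randomly from run to run,
yet every run ends in the same predictable total failure.

    # Toy sketch in Python: unpredictable steps, predictable end state.
    import random

    def drops_until_fatal(blood_ml=5000.0, fatal_fraction=0.4):
        lost, drops = 0.0, 0
        while lost < blood_ml * fatal_fraction:
            lost += random.uniform(0.04, 0.06)   # each drop is a random size
            drops += 1
        return drops                             # which drop was fatal differs per run

    print([drops_until_fatal() for _ in range(3)])   # three different counts,
                                                     # three identical outcomes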
> The premise is that if you run a brain (or perhaps a brain simulation)
> through an experience where you supply the inputs and it processes
> the data normally, it will be conscious. I think you will agree with
> that. We also have to assume that the brain can be made to run in
> a deterministic way, without any physical randomness. This would be
> more plausible in the case of a brain simulation but perhaps it could
> be arranged with a real brain.
I agree that a brain that processes inputs normally should be defined as
conscious. I disagree that brains are deterministic and will respond to
identical input with identical output every time. The internal state of the
brain is never exactly the same from one run to the next. If you override a
brain's internal states to the
point that they are controlled by external minds and not by the brain
itself, I begin to doubt that the brain is conscious.
> It follows that if you run a brain through the same experience twice,
> with the same initial state and the same inputs, it will be conscious
> both times. That follows because it satisfies the conditions of the
> premise, and the premise says that whenever you satisfy those conditions
> it will be conscious. It is an intermediate conclusion of the argument,
> and you have to reject the premise to reject this conclusion.
This I agree with, given your assumptions. If you don't force the brain to
think each thought at each step of the way, but rather let it process the
input on its own, then I agree it is conscious.
I am not sure I agree with the assumption that the same input into the same
brain will always produce the same results. Most theories of creativity
involve randomizing factors in the brain. Maybe the left brain will always
come up with the same response, but I believe that the right brain uses
randomization to enhance creativity. You can force its random factors to
replay as well, but you would have to do this at each step of the thought
process. Suppose your input was, "name something Clinton hasn't screwed
up"? Would the brain produce the same output every time? This question has
no determinate answer. I think the creative randomizing brain would think
up something different every time.
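Here is how I picture it as a toy program (the mechanism is invented): a
creative process that draws on a random source varies from run to run, and
you only get an exact repeat if you force every single random draw to replay.

    # Toy sketch in Python: creativity as randomization.
    import random

    ANSWERS = ["the national debt", "nothing", "his golf swing", "the weather"]

    def creative_response(prompt, rng):
        # the "right brain": randomization enhances creativity
        return ANSWERS[rng.randrange(len(ANSWERS))]

    prompt = "name something Clinton hasn't screwed up"
    print(creative_response(prompt, random.Random()))      # varies run to run
    print(creative_response(prompt, random.Random()))      # varies run to run
    print(creative_response(prompt, random.Random(1999)))  # forced replay...
    print(creative_response(prompt, random.Random(1999)))  # ...identical every time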
> We now introduce the notion of making a recording of all the internal
> activity of the brain during the first run, each neural firing or whatever
> other information was appropriate depending on how the brain works.
>
> We then add a comparison during the second run of the activity generated
> during that run with the recorded activity from the first run. Imagine
> comparing this neural activity at each neuron. The two sets of data
> will match, by the premises of the experiment, because we are exactly
> repeating the first run.
I don't think this is possible. Neurons trigger each other to fire by
releasing neurotransmitters into the liquid solution that exists between
them. When the concentration of the chemical gets high enough, the neurons
that detect the concentration will fire. It would be impossible to make this
chemical diffuse across a liquid medium in exactly the same way every time.
Each molecule diffuses randomly. It will work roughly the same way, but the
exact details are indeterminate. The restocking of these chemicals in the
neural stores is also random. The components used to make the chemicals
float around randomly in the blood. They are picked up randomly, as needed,
by individual cells. There is no way that this supply and demand will always
work out the same; different neurons will be supplied slightly differently
on each run. The only way to totally control the brain, which is mostly
liquid, is to control every molecule as it bounces around in that liquid.
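A random-walk toy model (with invented scales) shows the flavor of the
problem: the aggregate firing behavior is reliable, but the exact timing
differs on every run, because every molecule takes a different path.

    # Toy sketch in Python: diffusion across a synaptic gap as random walks.
    import random

    def ticks_until_firing(molecules=500, gap=10, threshold=0.5):
        positions = [0] * molecules   # 0 = releasing side, gap = receptor side
        ticks = 0
        while sum(p >= gap for p in positions) < molecules * threshold:
            ticks += 1
            for i, p in enumerate(positions):
                if p < gap:           # molecules that reached a receptor stay bound
                    positions[i] = max(0, p + random.choice((-1, 1)))
        return ticks

    print([ticks_until_firing() for _ in range(3)])
    # Roughly the same each run, never exactly the same: an exact repeat
    # would require controlling every molecule at every step.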
I therefore think the example is interesting but impossible. I agree in
general with your theory, but in reality do not think it could ever really
work out the way you said. Your philosophical question is whether the
repeated sequence is conscious if it is predetermined. This verges on
questions of determinism and free will, a religious debate. I
think the randomizing function will make each identical run unique, and each
brain will come up with unique responses, and each will be conscious.
> Finally, at each neural element we substitute the recorded data for the
> actively generated data. We block the actively generated signals
> somehow, and replace them with the signals from the recorded first run.
> Since these two signals are completely identical, it follows that there
> will be no difference in the behavior of the brain. Substituting one
> data value for another identical one is of no functional significance.
This is the point where I am positive that the brain is not conscious. To
do this, you must suppress all of the brain's own neuron firing and control
every neuron externally. True, you can make the brain act like it would
have anyway, but you can also make it act unnaturally. The
control/consciousness of this brain is now in the hands of the programmer
controlling each neuron, and not with the brain. The brain has become a
meat puppet.
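A toy sketch shows what the substitution amounts to (the element and its
numbers are invented): in replay mode the element's own computation is
blocked and a tape drives it instead. The tape can be identical to the live
run, or anything else the programmer chooses.

    # Toy sketch in Python: a neural element in "live" versus "replay" mode.
    def run_element(inputs, weights, recording=None):
        outputs = []
        for t, x in enumerate(inputs):
            live = 1 if sum(w * xi for w, xi in zip(weights, x)) > 0.5 else 0
            if recording is None:
                outputs.append(live)           # the brain drives itself
            else:
                outputs.append(recording[t])   # live signal blocked; the tape drives it
        return outputs

    inputs, weights = [(1, 0), (1, 1), (0, 0)], (0.4, 0.3)
    first_run = run_element(inputs, weights)             # the conscious run (premise)
    replay    = run_element(inputs, weights, first_run)  # byte-identical output...
    puppet    = run_element(inputs, weights, [1, 1, 1])  # ...or whatever the programmer wants
    print(first_run, replay, puppet)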
Of course, these answers seem obvious to me. You seem to be asking: if both
brains act identically, how can one be conscious and the other not? I think
you are not going back far enough to the roots of where thoughts come from.
The information being processed by the real brain comes from inside. The
information being processed by the meat-puppet comes from outside and must
be transferred into the brain somehow. Yes, we can make the information
coming from the outside look like the information on the inside, but it is
coming from a different source. That source is the external brains. This
controlled brain will never act conscious if the external brains don't cause
it to do so. It is not an independent consciousness that can act
independently from the other brains. If the experimenter has a heart attack
and dies, the brain stops working because its controller has stopped giving it
instructions.
I seem to be repeating myself, but this seems obvious to me. If a brain is
self-directed it is conscious. If it only does what external brains tell it
to do, then it is not conscious.
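The difference shows up even in a toy sketch (invented, of course): a
self-directed loop generates its own next state indefinitely, while a puppet
fed from an external stream halts the moment its controller stops.

    # Toy sketch in Python: self-directed brain versus meat puppet.
    def self_directed(state, steps):
        for _ in range(steps):
            state = (state * 31 + 7) % 100   # generates its own next state
            yield state

    def meat_puppet(instruction_stream):
        for state in instruction_stream:      # does only what it is told
            yield state

    print(list(self_directed(1, 5)))          # keeps thinking on its own
    controller = iter([42, 17])               # the experimenter dies after two steps
    print(list(meat_puppet(controller)))      # the puppet stops dead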
> The point is that we took a situation where the brain was conscious,
> by the premise of causal connectivity based functionalism, and by
> substituting one set of signals for an identical set of signals, which is
> arguably no substitution at all, we produced a brain which is passively
> running a replay. So either this seemingly ineffectual substitution
> has eliminated consciousness, which seems hard to understand, or passive
> replays are as conscious as functional brains, which you deny.
You have to block the brain's own neuron firing mechanisms. You have to
prevent the brain from thinking on its own so that you can substitute the
replay. Whatever function or structure or data you are blocking to have
this effect is the root of consciousness. You block this source of
consciousness, and replace it with a stream of consciousness coming from the
outside programmers. It is not clearly defined what you are blocking that
prevents the brain from thinking on its own. I do know that cutting up the
neurons and separating them will have this effect of stopping thought. We
also call that step "death". I believe that you are first killing the brain
so that there are no brainwaves and it is not conscious at that point. Then
you are piping in the brain patterns as defined by external brains.
The problem with your whole theory, I think, is that you are blocking
something in the original brain, without clearly defining what you are
blocking. Then you are claiming that the replay brain has all the
consciousness of the original. If it does not, it seems clear to me that
whatever you blocked was a required component of consciousness.
--
Harvey Newstrom <mailto://newstrom@newstaffinc.com> <http://newstaffinc.com>
Author, Consultant, Engineer, Legal Hacker, Researcher, Scientist.