From: Brent Allsop (allsop@swttools.fc.hp.com)
Date: Tue Nov 25 1997 - 13:16:26 MST
Hal Finney <hal@rain.org> continued:
> Each neuron works in this conceptually simple way. The complexity
> of the brain arises from the fact that billions or trillions of
> neurons are all interacting in a very complex network. But if we
> zoom in on any small portion of it, we see that this simple,
> essentially mechanical activity is all that is happening.
	I'm sorry I don't have a reference, but there was recently a
discovery of chemicals that neurons can release which affect large
numbers of nearby neurons, even ones that have no direct synaptic
connections. Any simulation would have to take such chemical behavior
into account and properly simulate its changing effect on the behavior
of other neurons. The fact that we are still discovering things like
this leaves open many possibilities.
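	Just to make the point concrete (this is only a hypothetical
sketch; the names and numbers are invented, and it makes no claim about
which chemical was actually discovered), a simulation that only passed
signals along explicit synaptic connections would miss this kind of
effect. It would also need something like a diffuse release that
modulates every neuron within range:

    # Hypothetical sketch only: a diffuse chemical release that modulates
    # every neuron within range, synaptically connected or not. All names
    # and numbers here are invented for illustration.
    from dataclasses import dataclass
    import math

    @dataclass
    class Neuron:
        x: float
        y: float
        excitability: float = 1.0   # scales how strongly inputs drive firing

    def release_chemical(neurons, source_xy, strength, radius):
        """Boost the excitability of every neuron within `radius` of the
        release site, whether or not it shares a synapse with the source."""
        for n in neurons:
            d = math.hypot(n.x - source_xy[0], n.y - source_xy[1])
            if d <= radius:
                # effect falls off with distance from the release point
                n.excitability += strength * (1.0 - d / radius)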
> Now it is true that this model is not complete nor is it fully
> verified. It could turn out that there are other important effects
> that are not yet recognized. But most people would agree that this
> model is at least logically possible. It might be wrong, but
> equally it might be right. There would be no logical inconsistency
> if it turned out that brains and neurons actually do work this way,
> that underlying the complexity of brains, with all their sensations
> and qualia, are these simple neural interactions.
	I'm not saying that what we know about the brain is wrong.
I'm sure there is some kind of, what Crick calls, "Neural Correlates"
of qualia. We simply don't yet know how these phenomenal qualities
arise from these Neural Correlates, or how they unify themselves into
our phenomenal awareness.
> The reason is because he does not accept the obvious conclusion of
> the neuron-substitution thought experiment. Given that neurons work
> as postulated above, it should be possible to replace a neuron with
> an electro-mechanical device. It works identically to biological
> neurons at the inputs and outputs, sensing and emitting
> neurotransmitter chemicals. But inside it is a computer, which is
> designed to exactly mimic the behavior of the biological neuron it
> replaces. When the inputs reach certain thresholds, it waits for an
> appropriate delay to simulate the travel of the neural impulse over
> the body of the biological neuron, and then triggers its output
> mechanism to release neurotransmitters in the same amount and timing
> that the biological neuron would have done.
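	In rough pseudocode, the device Hal describes would amount to
something like this (a minimal sketch; the threshold, delay, and output
amount are stand-in parameters, not real neural values):

    # Minimal sketch of the replacement "electronic neuron" described above.
    # The threshold, delay, and output amount are stand-in parameters.
    import time

    class ElectronicNeuron:
        def __init__(self, threshold, delay_s, output_amount):
            self.threshold = threshold          # summed-input level that triggers firing
            self.delay_s = delay_s              # mimics impulse travel over the cell body
            self.output_amount = output_amount  # neurotransmitter to release when firing

        def step(self, input_levels, release):
            """Sense neurotransmitter levels at the inputs; if they reach the
            threshold, wait out the propagation delay, then release the same
            amount of neurotransmitter the biological neuron would have."""
            if sum(input_levels) >= self.threshold:
                time.sleep(self.delay_s)
                release(self.output_amount)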
	Red has real qualities. We directly experience red. We know
what red is like. We can even come up with textual responses to try
to describe what red is and isn't like. Any machine that was
abstractly simulating the behavior of a consciousness that experienced
red, rather than actually producing conscious red, would have to have
very complex abstract representations to identically reproduce all the
qualities of red itself. Indeed, the fact that red is so phenomenal,
that it is so different from green, and even more different from salty
and all our other sensations, and the way it can all be unified into
one awareness model of the world, is what makes us so intelligent.
Though abstract representations could theoretically model this
powerfully intelligent behavior, I don't think they could do it
practically. You'd be programming a computer to lie if you tried to
make it describe what red was really like. And we all know that it
takes much more complexity to lie consistently than it does to tell
the truth.
> Some philosophers suggest that what will happen is that the
> consciousness in the partly electronic brain will get "out of sync"
> with the brain firing patterns. It will notice that the qualia are
> gone, but somehow it won't be able to report it. Apparently it will
> have lost control of its mouth. This would presumably lead very
> quickly to a total disconnect between the true consciousness, which
> is panicking at having lost control of its body, and some other sort
> of simulated consciousness, which seems to be going about its life
> quite normally. This would then imply that consciousness apparently
> has virtually nothing to do with brain activity since they can
> behave so independently. Most people will not go so far into this
> form of dualism.
	Yes, this is not at all what I'm trying to describe. All I'm
saying is that neurons, in some natural way, produce the conscious
qualia that we experience; that the information we consciously know
is represented, in some way, by the complex models built out of these
qualia; and that the "quality" of the phenomenon is important and is
what consciousness is. I am saying that any simulation would have to
take into account and represent all these qualities and their
differences. I believe such a simulation would be only theoretically
possible, not practically so, because of the rich diversity of all
our sensations.
> I think that Brent will suggest instead that the thought experiment
> won't work. It would be impossible to substitute an electronic
> neuron for a biological neuron without disrupting the brain's
> activity.
	I'd bet that there are phenomenal qualities that our
consciousness does not use, and that there are sensations we have not
yet discovered. I look forward to the discovery of other color qualia
so we can have something to represent wavelengths of light outside
the current visible spectrum. We can abstractly represent light
outside the visible spectrum by mapping the wavelengths onto what we
use to represent the visible wavelengths, but this will be nothing
like what actually different qualia would be like.
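	A trivial abstract mapping of that sort might look like this
(a made-up "false color" compression of a wider band into the visible
one; the ranges are only illustrative):

    # Illustrative "false color" remapping: compress a wider wavelength band
    # into the visible band so out-of-range light is represented with the
    # qualia we already have. All ranges are approximate and illustrative.
    VISIBLE_MIN, VISIBLE_MAX = 400.0, 700.0   # nm, roughly the visible band

    def to_visible(wavelength_nm, band_min=100.0, band_max=1400.0):
        """Linearly squeeze [band_min, band_max] into the visible range.
        This gives an abstract stand-in, not a new quale."""
        frac = (wavelength_nm - band_min) / (band_max - band_min)
        return VISIBLE_MIN + frac * (VISIBLE_MAX - VISIBLE_MIN)

    print(to_visible(1000.0))   # an infrared wavelength shown as a visible one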
> This is a very strong assertion. It's not just a matter of saying
> "we may be wrong", it's saying, "we must be wrong".
	I am saying that the common sense idea is wrong: the tree we
are aware of, which we think is beyond our eyes, is not really beyond
our eyes. The tree we are aware of is a tree constructed of
phenomenal qualia in our brain via certain not yet completely
understood neural correlates. This tree we are aware of only
abstractly represents the real tree beyond our eyes.
	I am also saying that the idea that phenomenal sensations can
somehow magically arise from "hypercomplex relatedness" is very
wrong. What we know must be represented by something very real. Our
conscious knowledge is built out of this stuff, whatever it is.
	I am not saying that our current understanding of neural
phenomena is wrong. I'm simply saying there is a bit more that we
haven't discovered yet, and that the phenomenal quality of this stuff
is just as important as the abstract causality of this stuff.
> Personally, I don't find this very convincing,
	What and where, then, do you think red is, and what do you
think our conscious knowledge is represented with?
Brent Allsop