Date: Mon, 8 Jan 2001 03:21:56 -0500 (EST)
From: Dan Fabulich <daniel.fabulich@yale.edu>
Subject: Re: Placebo effect not physical
Hi Dan
Finally got round to the rest of my reply to last Monday's mailing.
I should mail you a (gratis) copy of the MVT 1hr. film,
Strange Case of the Third Eye. Maybe this will clear some
points up ..... contact me off-line if you want me to mail you
this .......
S> You are also committing a well-known error by conflating "exists"
S> with "physical" (the same strategy that was used in one of the
S> well-known arguments for the existence of God).
>I don't know that one. I know quite a few of these, but it'd be
>interesting to hear another. Or maybe you have a different gloss on
>an old argument?
The argument is something like "Anything that can be thought of
could exist, God is the greatest thing that can be thought of, there is
a necessity that there is a "greatest thing" that can be thought of,
therefore God exists."
S> Absolutely .. under *your* definition of physical MVT is physical!
S> I just don't find any value in your definition.
>I raised that point to emphasize that MVT is *not* physical, and,
>therefore, if physicalism is correct, MVT is wrong. My definition of
>physical does *not* include non-verifiable mental objects.
I fail to see any distinction between your "physical second-order
concepts" and "non-verifiable mental objects". We are talking about the
same phenomena, but you pre-fix "physical" to everything. So my
"mental event" is your "physical mental event." I hate philosophical jargon
and so try with MVT to stick to discussion of the phenomena at issue.
Neither of us can verify our thoughts ultimately .... the idealist claims that
our thoughts "verify" us. (I don't want to get bogged down in discussing
Prof. N. Malcolm's Verificationism here, by the way).
S> But your physicalism "denies that symbols are anything more than their
S> physical part. e.g. Writing is nothing more than scratches on paper" but
S> now in response to my point you add a qualifier about "USE" made by
S> them, which seems an intentional quality rather than anything physical.
>I was *correcting* myself. Note the use of "forgive me." While use
>is "intentional," "intentions" are properties of physical objects, and
>are thus physical properties.
So are all "properties" of a thing the same as the thing itself then?
Or perhaps a "property" of a thing, like engine noise is a
property of motor-cars, extends further than the physical motor-car,
perhaps a property of the motor car is the worry caused to the owner
because the road-tax has run out .. how far do we want the term
"property" to stretch?
Perhaps physical objects are themselves properties of (hidden)
mental objects?
>There's no non-physical thing at work
>here. Just physical brains and physical bodies emitting physical
>sounds and making physical scratches on paper. The scratches on paper
>bear physical relations to other physical objects.
Yes, I know you always come back to this stark epiphenomenalist claim,
but I think this is only a partial description of the world. I think that you
need to explain the conscious mechanisms involved in translating from
the purely physical scratches to the whole rich and varied sentient
understanding.
You could accommodate the phantom (felt, not cellular) median eye
within your taxonomy as a "property of a once-present physical
median eye" without calling it a non-material object. Does this help?
> >Properties of a non-physical object? No good. As far as I'm
> >concerned, ideas may be properties of physical objects, or
> >second-order properties of properties of physical objects, but at the
> >end of the day, it must be a physical property.
>
S> But this is pure conjecture, just your "idea", what evidence do
S> you back this with? I can back up MVT with scientific experiment
S> and observation.
>Excuse me? Reread the claim that I made. I said that there are only
>physical objects and properties of physical objects. You do *not*
>have an experiment to show that there are non-physical objects. You
>might think that some of your experiments make it natural to assume
>that there are some non-physical objects, but I happen to find that
>notion rather unnatural.
I want to say that the phantom pineal eye is more than an "idea of a
physical pineal eye" ... in the way that such an idea might take a
few seconds to have and then pass away. If you want to call it an
"idea" or a "property of a historical pineal eye" I don't care ..... but
the fact is that the idea is more pervasive/ persistent, not only because
it is more profound, but because it is the source of any other idea/ property
of a physical object.
During the mammalian-reptilian interface, when the pineal eye was lost in
our ancestors, huge tranches of behaviour that had been governed by
photo-cellular and endocrinal effects that originated with solar radiation
and were received by the pineal eye receptor became *internalised* to
control by the individual (warm-blooded) animal. E-2 consciousness (experienced
behaviour, or epiphenomena .. take your pick) originated in this period
because of biophysical evolutionary changes.
You cannot say "which initiated which", because the conscious changes
(new REM associated brain states due to phasic transients in particular)
and the physical changes occurred *concurrently*.
>Every part of the brain can be observed, in principle if not today in
>practice. (You're not a Penrosian, are you?)
No, though I touch on this in the film.
> The brain follows all
>the ordinary physical laws, and we can watch it do so under a
>microscope. Some aspects of brain functioning are opaque to us today,
>but as nanotechnology develops we'll be able to observe the details
>even more closely.
I think some further *analysis* needs to be conceptual, in addition to
building bigger and better microscopes. MVT is conceptual as
well as physical, in that some claims can be tested empirically
but that some claims are, necessarily, inferred. These claims rely
on triangulation of circumstantial evidence from different scientific
fields, explanatory and utility functions, and 'pattern completion' or
'elegance.'
>The brain "cannot be observed" only in the sense
>that the workings of the Pentium III cannot be observed: they're both
>very small and hard to watch closely with today's technology. But
>they're both Turing equivalents. (Equivalents. Not machines.
>Equivalents.)
Neurons can be observed, and these function more like
Rosenblatt's Perceptron architecture than any Turing model.
Sure, the PIII has a strong analogy (equivalence) with the Turing model,
but brains have a stronger analogy with non-Turing models:
for instance Back-Propagation seems a more mathematically exact
equivalence than Turing-Church. So I disagree with you on the facts.
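To make the contrast concrete, here is a minimal Rosenblatt-style perceptron
sketched in Python (my own toy illustration, not code from either of us): a
weighted sum with a threshold, adjusted by an error-correction rule, rather
than a machine stepping through a tape of stored instructions.

# Minimal Rosenblatt-style perceptron: a weighted sum plus threshold,
# trained by error-correction rather than by stepping through a tape
# of stored instructions. Purely illustrative.

def predict(weights, bias, inputs):
    # Fire (1) if the weighted sum of inputs exceeds the threshold, else 0.
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if activation > 0 else 0

def train(samples, epochs=20, rate=0.1):
    # samples: list of (inputs, target) pairs with targets 0 or 1.
    n = len(samples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)
            # Adjust each connection in proportion to its input and the error.
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

if __name__ == "__main__":
    # Learn the logical AND function (linearly separable, so one unit suffices).
    data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    w, b = train(data)
    print([predict(w, b, x) for x, _ in data])  # expected: [0, 0, 0, 1]

That single unit happily learns a linearly separable function like AND; it is the
*style* of computation, not the raw power, that I am contrasting with the Turing picture.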
>No one can present a case so water-tight that you can't interpret it
>in some perverse way.
So you say, but I think that MVT (together with applications from it
such as my new, Reverse-Hypnosis(TM)) should be so persuasive
that nobody would seriously bother to reinterpret it perversely ... a
strong argument should be tight enough that any perverse interpretation
of it would seem ridiculous.
The blunt facts and argument have to hold up as well of course, so
I invite and appreciate counter-arguments such as in this thread.
It is hard to reinterpret Darwin's main points about fossils supporting
an evolutionary theory of history ... but some rabid fundamentalists
argue that we cannot be sure that the fossils weren't planted by Satan
to fool the scientists! Same with MVT ..... why don't you try to come up
with a perverse interpretation of the same empirical studies?
>This is a matter of interpretation so as to
>maximize correctness. I'm not saying that at the end of the day you
>need to agree with me, but you need to interpret what I say such that
>it is as agreeable as possible. This may turn out to be less
>agreeable than your own position, but until you do the earlier work,
>you'll never know. It's easy to find dumb interpretations, and if you
>stop there you'll never understand your opponents' arguments.
Sometimes I use reductio ad absurdum with your points, but it is
not to be disagreeable; there will be a serious point in there somewhere.
I do understand pretty much where you are coming from, yours is a
standard conventionalist position ... or do you have some unique spin
that I have missed altogether?
S> There are records of young car crash victims who lived an apparently
S> normal life, but on autopsy it was discovered that only 5% of their brains
S> had been functioning! If brain damage occurs at an early enough age
S> it is plastic enough that any part of the brain can take over functions of
S> any other part ... so you are wrong in fact on this issue.
>I knew this, and nothing I said contradicts this. All I insist is
>that the brain has a non-plastic property or two, that it's not protean
>in quite *every* way. The characteristics which it can't change are
>the hardware. The ones that it can change are the software. The
>hardware of the brain happens to be Turing equivalent. Some of the
>hardware of the brain, like the laws of physics, can't break at all
>(and so it's obviously nonsense to talk about self-repair in those
>cases). But some of the hardware of the brain can break in a
>non-reparable way. When it does, the brain is as helpless as a Turing
>machine with a broken servo. Hardware need not be fragile, but the
>brain does have some.
Yes, obviously. The brain has both finite-state *and* infinite-state
components ... but you only need one infinite-state component to
give the whole circuit infinite-state (reconfigurable) capabilities!
S> We have already decided that *real-time* responses cannot be
S> simulated.
>irrelevant. The limitations to which I refer are not speed-related.
>I'm happy to admit that neural computers can outstrip serial Turing
>machines in speed in solving some problems. (Though not all.)
Speed isn't the issue. One system works, one pixel or bit at a time
in lock-step ..... the other works with groups or all pixels or bits at
once, and in non-linear bursts. How much more different can you get?
S> Neither could a big Turing machine have evolved.
>Also irrelevant. Being evolved or not has nothing to do with
>behavior, nothing to do with today's limitations. You're telling me
>properties they don't share. I agree with you about these. I'm
>telling you some properties that they DO share.
Yes, but you seem to be admitting that my analogy to brains
explains *all*, or at least *more*, properties than your model, which you
concede only shares *some* properties with brains. Thus your
analogy is the weaker and can safely be discarded.
S> The brain can develop entirely new modules (eg. the neocortex)
S> in response to changing environmental demands, how would your
S> Turing machine do this?
>You'll hate my answer. The brain can't develop entirely new hardware.
>Entirely new physical layout is not hardware: it's software. Hardware
>is the part you can't change, by definition. Neither can the brain
>develop software beyond what its hardware allows, neither can the
>brain develop software in some non-deterministic non-random way. So,
>what can the brain do? Develop "new" modules following the rules
>encoded in its hardware. The Turing machine could do that, too, if it
>were fast enough. (And, again, speed is irrelevant to my claims about
>general limitations.)
You're correct -- I do hate your answer! But as we discussed on plasticity,
not all modules of the brain are composed of the same types of cells, and
different, earlier systems might not be plastic in the same way as other parts,
whereas your Turing machine can only manufacture more of what it knows,
NOT mutate different types of cells and functioning for new behavioural
demands. If you come back with "What about G.A.'s?" my response is
that Genetic Algorithms require a serial computer to run on, but the only
types of circuits we find in the brain are massively parallel distributed!
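To show what I mean by a G.A. needing a serial machine, here is a toy genetic
algorithm in Python (purely my own sketch, solving the textbook "OneMax"
problem with made-up parameters): every evaluation, selection, crossover and
mutation happens one after another, inside ordinary loops, on a single processor.

# Toy genetic algorithm (OneMax: evolve a bit-string of all 1s).
# Note how everything happens serially: evaluate, select, crossover,
# mutate, one individual after another, inside ordinary loops.
import random

GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 50, 0.02

def fitness(genome):
    return sum(genome)  # count of 1-bits

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]          # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(max(fitness(g) for g in population))        # approaches GENOME_LEN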
>They only need to have a few characteristics in common to be "similar"
>in the way I mean. As far as I'm concerned, transistors and wooden
>rods are "equivalent" for the purposes of this conversation: the rods
>of Babbage's engine are equivalent to the transistors in modern-day
>desktops. They have little in common (and heaven knows that no
>wooden-rod computer could have evolved under normal circumstances),
>but just enough to make my point. The computers themselves are
>analogous to individual neurons, but the arrangement of neurons is
>analogous to the connection of many computers, of many Turing
>machines, which is equivalent in behavior to one big fast Turing
>machine.
Behaviour, but what about the conscious experience? Wood and
metal do not share properties with organic systems. Furthermore,
how do your (non-growing, non-regenerative) Babbage engines
manage to lose a major part of their structure (the pineal eye, which
was shared by all (or virtually all) vertebrate species at least as far
back as the jawless fishes of the Ordovician age), and yet not only
continue to self-program and function in a tough, learning environment,
but actually 'improve' and expand their behavioural repertoire beyond
the original system constraints?
>If anything, you might try poking holes at my claim that many Turing
>machines hooked together are equivalent to one big Turing machine
>(since, of course, the one big one couldn't do what the many little
>ones could as quickly, or in quite the same way) but the analogy
>between a neuron and a Turing machine is pretty strong.
No, hadn't thought of this. I don't think complexity = self-awareness,
so the number of Turing machines is beside the point ..... I think
that to recreate or model the world "virtually" requires part of the
conscious system itself to be "virtual" ... that's all MVT's
stronger claim really is .. pretty value-free on Physicalism vs.
Idealism and all other second-order theories that come from
our "minds." MVT answers the really important basic questions,
the rest (of academic philosophy) is just *argumentative froth.*
S> But only "materials" can be verified (molecular descriptions?) by
S> science, whereas you widen the claim for physicality to everything,
S> even principles, which cannot be verified as materials. No way.
>Sentences *are* physical. I can not only point to sentences, I can even
>drop them on my foot. You KNOW something's physical when you can drop
>it on your foot.
MVT is more interested in the medium itself than particular forms or types
of message ..... a sentence engraved on stone can be dropped on your
foot, but not one spoken and heard over the radio airwaves, or voiced
silently and internally by the reader. And it is these internal states, and
how the brain co-operates in bringing them about, that is at issue between us.
S> Yes, ditch the philosophy by all means. It is quite inadequate.
S> I argue for a post-human aesthetic that can embrace the
S> powerful new vocabulary and world-view that MVT offers. The
S> many various mental phenomena can all be described using
S> the evolutionary narrative (scientific) offered by MVT, whereas
S> the old philosophical jargon offers nothing. I disagree with you
S> that functionalism solves Leibnitz' Law, by the way, on any
S> level other than by linguistic contrivance. MVT offers a constructive
S> reconciliation by means of the virtual generic sensor as a bridge.
>Here, I think, you just referred to an argument which you might
>present, but didn't actually bring one to the table. Obviously a
>functionalist might say the same about MVT.
Sorry if I have not made my point clearly. It is for "identity" that I argue,
as opposed to the much weaker "equivalence" that you seem happy with.
MVT is the Rolls-Royce/ Supreme stretched Cadillac of theories in
that it tackles Leibnitz' objection to Descartes *full on*. No
substitutes accepted .... I say that the phantom median eye
IS identical with the experience of consciousness, not just an
equivalent substitute or an analogy.
Full equivalence (= Identity) requires INTERCHANGEABILITY.
S> God is a fictional character .... he cannot be pointed to so is not
S> a physical object ... but how is he a "property of a physical object?"
>Fictional characters aren't properties of physical objects. They
>aren't anything at all.
Then how can we discuss them at all?
>We accept some truth claims about them, but
>that's a relationship between us and the sentences, not between
>anything and the characters.
But sentences can be broken down into participles, words, tenses,
objects/ characters &c. They are not indivisible units. If you ask someone
what they thought of Leo 'Sprout-face' di Caprio's character in Titanic
they can't respond to you by talking about the "sentences."
S> What you call a "physical property" could equally well be described as
S> a "concept."
>The words "property" and "concept" are often used interchangeably, but,
>of course, one of them is held by the object with nobody else around,
>the other is something that we have relative to the object.
But if "nobody else" is around, and the concept doesn't happen to be
written down, then how under your schema does it "exist?"
S> We also cannot describe what "Laws" are in operation at the point of
S> singularity in a black hole.
>So what? There are no black holes in the brain. The laws of physics
>operate just fine there.
But if physics can't reveal to us a person's inner landscape then
either (1) it isn't in the brain, or (2) the physical laws aren't adequate
and some Phenomenal Laws (MVT) that build on the directly observable
effects are needed in addition, or (3) there is nothing happening other than
billions of electrochemical impulses, and the person has no felt or
perceptual experience.
The Laws of Physics fail by omission as a "complete" or unified
field account that includes mental events. Maybe MVT can be incorporated
in the current Laws of Physics, or maybe it will amend them or even
point to entirely new ones ..... I don't know yet .....
>Deep Blue can and does internally model the board before making a
>move. (Will you at least grant the machine that? It literally has
>symbolic pictures of the board stored in its RAM. That's a model,
>by anyone's account.) But no one's mind affects Deep Blue when it
>does this.
Yes, but what I don't accept is that either the E-2 brain or a neural
computer has symbolic pictures of the board stored anywhere.
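So that we are at least disputing the same thing: by a "symbolic picture of the
board" I take you to mean nothing more exotic than an explicit data structure,
something like this Python sketch (my own illustration, and obviously nothing
like Deep Blue's actual internals):

# An explicit, symbolic board representation of the kind a chess program
# holds in RAM: each square is a token, and a move is a transformation of
# that structure. Illustrative only.

def initial_board():
    # 8x8 list of lists; '.' = empty, upper case = White, lower case = Black.
    board = [['.'] * 8 for _ in range(8)]
    board[0] = list("rnbqkbnr")
    board[1] = ['p'] * 8
    board[6] = ['P'] * 8
    board[7] = list("RNBQKBNR")
    return board

def apply_move(board, frm, to):
    # frm/to are (rank, file) pairs; returns a new board with the piece moved.
    new = [row[:] for row in board]
    new[to[0]][to[1]] = new[frm[0]][frm[1]]
    new[frm[0]][frm[1]] = '.'
    return new

board = apply_move(initial_board(), (6, 4), (4, 4))   # white pawn two squares forward
print("\n".join(" ".join(row) for row in board))

My claim is that nothing structured like that sits anywhere in the E-2 brain
or in a trained network's weights.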
>Look, you've got interchangability problems, too. If the pineal eye
>is a feeling, then it's interchangeable with mental statements, but
>it's not interchangeable with claims about cellular material.
It is interchangeable with certain claims about the cellular (pineal) eye. For example,
it fills the same shape/ field of vision as the cellular organ, pivotal
between the external and internal worlds. The shape of the cellular
brain arose because of co-existence with pineal sensory input over
the millennia, and there is a physical foramen as well as pathway gaps
in the shapes and sulci of the brain, channels for passing pineal
data back and forth, and connections at deep levels of the brain (even
pre-dating optic lobes) for pineal sense-data processing.
The phantom arm can cause behaviour in the person in exactly the same
way as its physical template ... experiments prove that patients will
instinctively act (covering the face with the phantom arm as they used to with
the physical arm, complaining that a shoe fitted over the foot of a wooden
leg feels too tight, or falling over by "forgetting" that the leg is no longer
present).
I argue that as we go on and evolve over time, variation from the cellular
template or blueprint of mind will increase, and 'self' control will take over
even more from the environment, external signals and solar/ physical
constraints.
S> But if you claim this, then "brain states" and "thinks" are the same thing,
S> so you can substitute one for the other in any sentence without changing the
S> meaning? Is this what you are saying?
>They are, of course, different parts of speech.
Different sentences entirely then?
>To be precise: "is
>thinking" is interchangeable with "is in certain brain states." "is
>thinking about X" is interchangeable with "has certain brain symbols
>which refer to X."
Then how do we actually describe that someone is in particular
brain states? For example, I might want to say that their left
pre-frontal cortex seems particularly active today, but all you can
describe to me are the thoughts going through their head about exams
or whatever ... do you see my point? Not interchangeable.
S> How do I measure "meaning" ..... what tests can I do
S> and what physical measurements do I take, using
S> what instruments?
>Lexicography. Sociology. Anthropology. Meaning is use. You look
>and see how people use the words. You write it down. You publish
>articles about it. You participate in the writing of dictionaries and
>encyclopaedias. Did you not notice that Linguistics is a science?
Meaning sometimes is associated with uses of particular knowledge,
but not always and not necessarily. You are talking about applications
from concepts/ meanings ... and even some of these uses might not
be measurable.
S> I disagree that a serial simulation is identical with a physical, parallel
S> instantiation ... they may exhibit behaviour that we think is equivalent,
S> but one is algorithmic and provable at any point, and the other is not.
>Both are algorithmic, but one has a more complex program than the
>other. Look at actual parallel neural computers. There is absolutely
>an algorithm which completely describes their behavior. It's a bitch
>to write down, but it's obviously there. Stringing together
>algorithmic components makes for a more complex algorithmic object,
>not something radically non-algorithmic.
There are quite a few mathematical formulae that describe how
neural computers can be instantiated on serial computers ... but these
algorithms are just to get the things simulated. The actual brains or
clusters of networks do not have any identifiable algorithms, or even
internal states that correspond to particular symbols more often than not.
Are you talking about these mathematical equations as algorithms?
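If it helps pin the question down, the sort of thing I mean by "these mathematical
equations" is a layer update like y = f(Wx + b), which a serial machine grinds
through one neuron at a time. A bare-bones Python sketch of one feed-forward pass
(my own illustration, with invented weights and the usual logistic activation):

# Serial simulation of one feed-forward pass through a small network.
# The "parallel" layer update y = f(Wx + b) is computed here one neuron
# at a time, in an ordinary loop on a single processor. Illustrative only.
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer_forward(weights, biases, inputs):
    # weights: one list of input weights per neuron in the layer.
    outputs = []
    for neuron_weights, bias in zip(weights, biases):     # serial, neuron by neuron
        z = sum(w * x for w, x in zip(neuron_weights, inputs)) + bias
        outputs.append(logistic(z))
    return outputs

# Tiny 2-3-1 network with made-up weights, just to show the mechanics.
hidden = layer_forward([[0.5, -0.3], [0.8, 0.1], [-0.6, 0.9]], [0.0, 0.1, -0.2], [1.0, 0.5])
output = layer_forward([[1.2, -0.7, 0.4]], [0.05], hidden)
print(output)

The algorithm describes the *simulation*; my point is that the wet network itself
carries no such recipe around.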
S> Of course, reciprocally, there is no task that can be performed by a
S> (dumb but fast) Turing machine or von Neumann that cannot be done
S> using parallel distributed architectures ..... but I do not claim that
S> somehow von Neumann processors are REALLY neural computers,
S> or that neural architecture has *anything whatsoever* to do with their
S> design or evolution. They are just a (less optimised) alternative!
>Turing machines ARE neural computer equivalents. But that says more
>about the neural computers than it does about Turing machines.
Are you just saying that Land-Rovers are Suzuki 4-wheel-drive equivalents, or
that a bicycle is a Land Rover equivalent because it has wheels and
transports people along a road? ... you must take my point that Identity
is what we need, not the (ill-defined) and lesser *equivalence*.
S> The medical literature on why (in ALL cases of recorded traumatic
S> organ loss) we have phantom sensations explains this point adequately.
S> There is a necessity about such phenomena, not a voluntary choice.
>Again, I agree that people feel that way. You don't tell me anything
>about the mechanism by which people feel. You tell me about the
>mechanism by which people act like they're in pain. But you can't
>show me how the pain links up with the brain states.
I refer you to the Gate Theory of Pain. I actually think that pain, since
it comes through nerves at the skin, has a different attentional mechanism
from reflective thought, where the MVT model has more to say. It is
more of a reflex experience than consciously (internally) generated,
but, sure, hypnosis can work as anaesthesia, so some pain is evoked
centrally rather than just peripherally at the damaged limb/ skin.
S> The neurosignatures are generated along with 'phantom eye'
S> information thereby identifying this information as "self" originating,
S> but distinct from signals originating at the retina and so on.
S> I accept Melzack's neuromatrix theory of self, by and large.
>Again, this is not enough. Identification as "self" originating might
>be the same as feeling, depending on what you meant by
>"identification." If it IS the same as feeling, you've left it a
>mystery HOW this "identification" feeling happens, which is different
>from (ta-da) identification brain states. If it isn't, then you've
>left the feelings mysterious and told me all about brain states.
>No solution.
No, the neurosignature recognition is pre-conscious (not revealed
to consciousness). This checking is pure brain-state. We are just
left with the "feeling" or consciousness; the background processing
is transparent to us.
S> >Neural computers, all of them, are Turing machine equivalents.
>
S> No ... as discussed above, I can simulate all the behaviours of a
S> Turing machine on neural systems, but they are not identical.
>Read what I said. Read what you said. I said "equivalent." You said
>"identical." You ignore what I say and knock down a straw man who
>thinks that they're identical. What's the point? Pay attention.
>"Equivalence" and "identity" are radically different relations.
Yes, it is exactly that you can only show "equivalence" and not
"identity" that makes your theory a Straw Man. I want to show
that MVT is more complete and better than competing theories,
I don't necessarily have to disprove your Turing theory.
>To be "equivalent" is to have one relevant property in common. To be
>"identical" is to have ALL of one's properties in common. If you say
>"I need a green thing! bring me a pear!" and I bring you a green
>apple, you say "this is not a pear!" I'll say "I know, but this apple
>is equivalent to the pear: they are both green."
>It is foolish to then say "but pears are different from apples!"
>because we both know that. The question is whether they are different
>in a RELEVANT way. They need not be identical to be equivalent. Get it?
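Before I answer: if I put your pear/ apple distinction into something concrete,
it comes out like this (a toy Python sketch of my own, not anything from your
argument):

# "Equivalent" here = agreeing on the one relevant property;
# "identical" = agreeing on every property. Toy illustration only.
from dataclasses import dataclass, asdict

@dataclass
class Fruit:
    kind: str
    colour: str

def equivalent(a, b, relevant="colour"):
    return getattr(a, relevant) == getattr(b, relevant)

def identical(a, b):
    return asdict(a) == asdict(b)

pear = Fruit("pear", "green")
apple = Fruit("apple", "green")
print(equivalent(pear, apple))  # True  - both green
print(identical(pear, apple))   # False - they differ in kind

Fine as a definition of the two relations; now here is why it does not satisfy me.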
I say they are different in several relevant ways ....... by MVT you
get what you see, and an apple is an apple ... your theory is undiscerning
in that it seems a pear might do instead. I want to *identify* the mechanisms
of E-2 consciousness, and intelligence, and not just to say it is something
like a theoretical Turing machine that never has been, and never will be, built.
S> You accept that the Turing machine is running neural simulations in
S> lock-step (very fast, but not INSTANT) .. and so you fail on your
S> equivalence claim just on the real-time property alone .... and I
S> think there are further differences.
>The fact that they differ on one property, or two, or a hundred, or
>even more properties, doesn't imply that they aren't equivalent. They
>only need to share the relevant properties (which, as far as I'm
>concerned, are only one or two).
Surely better to share more properties (including all your "relevant"
ones) just for the sake of theoretical completeness?
>I'm definitely starting to tire here... as school starts up again, I
>may have to drop this end. We'll see how much longer I can go.
Yes, I agree .... why not slow the cycle of replies down to about one
a week? It isn't, I admit, an entirely pointless exercise for me as,
with your permission (?) I intend to include these postings as part of
the FAQ section of my forthcoming Primal Eye/ MVT Encyclopedia ....
Let me know about getting a copy of the film to you ...
Light in Extension
www.steve-nichols.com
Physician of Souls (Hypnotherapist)