From: O'Regan, Emlyn (Emlyn.ORegan@actew.com.au)
Date: Tue Jun 22 1999 - 23:53:08 MDT
From Harvey:
> Take the example of the straw that broke the camel's back. You can keep
> loading a camel up with straw. You can add one straw at a time. I can't
> predict when the weight will be too much for the camel. But you can't put
> an infinite amount of weight on top of the camel before it collapses.
>
Point absolutely taken, and this reminds me of some problems that I had with
Chalmers' arguments about dancing qualia (slowly swapping components in a
structurally identical brain, where the two versions disagree about blue and red).
A similar example: you could take a brain and remove one neuron at a time.
Does removing a single neuron kill the brain/extinguish consciousness? No.
Then, by induction, a brain with no neurons is alive and conscious, if the
original was. Silly stuff.
I think the problem lies in the discrete nature of these arguments -
consciousness is either on or off. I would argue that there is a continuum
between conscious and not conscious, which is more believable. Then the straws &
camels problem is dealt with, and a brain with no neurons can finally be
pronounced dead.
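
To make the continuum idea concrete, here's a quick toy sketch in Python
(the names, the 100-neuron "brain" and the linear scaling are arbitrary
assumptions for illustration, not a claim about how consciousness actually
scales):

# Toy illustration only: "consciousness" here is just a number.

N_ORIGINAL = 100  # neuron count of an intact toy brain

def is_conscious_discrete(neurons_left):
    # The on/off view: the predicate is a step function, so somewhere a
    # single neuron removal has to carry all the blame (here, the last one).
    return neurons_left > 0

def consciousness_level(neurons_left):
    # The continuum view: each removal shaves off a little, no single
    # removal is decisive, and a brain with no neurons ends up at 0.0.
    return neurons_left / N_ORIGINAL

for n in (N_ORIGINAL, N_ORIGINAL // 2, 10, 1, 0):
    print(n, is_conscious_discrete(n), consciousness_level(n))

Under the on/off view the induction argument forces an arbitrary cliff
somewhere; under the graded view the camel just gets slowly squashed.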
Also from Harvey:
> The only acceptable upload I have heard involves replacing my
> biological neurons one at a time with mechanical ones. Eventually, I would
> be in the mechanical robot and would feel it was me.
Or maybe "you" would slowly, inexorably slip away during the process, being
replaced by a new copy of "you" (by "you" I mean the conscious you, the bit
that means it when it says "I"). I am not convinced by this gradual
replacement argument that the conscious continuation of self would be
achieved. I am sure that the final self would report "Yes, it's me, it
worked", but would the old version of your self have experienced a
devastating demise? In Tumbolia, no-one can hear you scream...
---

I feel intuitively that causality is actually the basis of consciousness, that
mere patterns are not conscious. The problem is that it seems to be very
difficult to define the difference between a dynamic (causal) system and a
ruddy great lookup table.

Where is causality? At its basis, isn't it about fundamental
particles/waves/energy/other messy tiny stuff interacting? When one neuron
fires and that sends a signal to the next neuron, there are actually a bunch
of subatomic things happening, one sparking the next sparking the next. Except
as you get closer, the causal quality of these interactions retreats before
you.

Even if you can chain together causality at that level, and build up from
there to show it at the level of cells/silicon/email, the isomorphisms are
incredibly different and complex, between a chain of causality in a neuronal
brain and one in a nanorobot copy of that brain, even more so in a software
sim of the same brain. If you are prepared to go that far with isomorphisms,
you'll probably find more isomorphisms between "causal" and "non-causal"
intelligences which are no more complex. Is there a decent distinction?

My apologies if this stuff has been covered before, but I think it is more
interesting than g*ns...

Emlyn (not the copy)
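
P.S. A toy sketch of the lookup-table worry, in Python (the two-neuron
"brain" and all the names are made up purely for illustration): two systems
with identical input/output behaviour, where one actually steps through the
dynamics and the other just replays a table precomputed from them.

from itertools import product

# Toy "brain": two threshold neurons. Entirely made up, just to have
# *some* causal dynamics to contrast with a lookup table.

def step(state, inp):
    a, b = state
    new_a = 1 if (b + inp) >= 1 else 0
    new_b = 1 if (a + inp) >= 2 else 0
    return (new_a, new_b)

def run_dynamics(inputs, state=(0, 0)):
    # the "causal" version: each state is produced from the previous one
    trace = []
    for inp in inputs:
        state = step(state, inp)
        trace.append(state)
    return trace

# The lookup-table version: run the dynamics once for every possible
# 3-step input sequence, then never touch them again.
TABLE = {seq: run_dynamics(seq) for seq in product((0, 1), repeat=3)}

def run_lookup(inputs):
    return TABLE[tuple(inputs)]

for seq in product((0, 1), repeat=3):
    assert run_dynamics(seq) == run_lookup(seq)
print("identical behaviour; only the causal story differs")

From the outside the two are indistinguishable; whatever "causality" is
supposed to add has to live in the difference between computing the trace
and merely retrieving it.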