Re: Is cryopreservation a solution?

From: Anders Sandberg (asa@nada.kth.se)
Date: Fri Sep 12 1997 - 08:48:57 MDT


Hagbard Celine <hagbard@ix.netcom.com> writes:

> Anders Sandberg wrote:
>
> > Holism can work the other way too: an almost correctly put together
> > brain would self-organize to a state very similar to the original
> > person as memories, personality, chemical gradients etc relaxed
> > towards a consistent state.
>
> It seems that some outside intervention would be necessary in this case.

No. Self-organization is just that - organization that occurs by itself.
Outside intervention isn't holistic in any sense.

> Is there any precedent for cellular self-organization? I mean to say, if
> the neurons are simply in the wrong place, what would make them move
> about spontaneously to their identity-creating positions?

Yes, there are precedents. Neurons do an amazing job of connecting to
the right synapses by following chemical gradients, especially during
development but also afterwards. It is known that synapses can branch
and split, making the new synapses seek out new connections with the
other neuron. How this is regulated is not yet known, but there is
a lot of cellular self-organization going on.
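
As a cartoon of the gradient-following part, here is a toy Python
sketch (the field, the numbers and the "growth cone" are all invented
for illustration; real axon guidance is enormously messier):

  import math
  import random

  # Toy chemoattractant field: concentration falls off with distance
  # from a hypothetical target site at (10, 10).
  def concentration(x, y, target=(10.0, 10.0)):
      d = math.hypot(x - target[0], y - target[1])
      return 1.0 / (1.0 + d)

  # A "growth cone" samples the field around itself and climbs the
  # gradient; a tiny bit of noise just breaks ties.
  def grow(x, y, steps=200, step_size=0.5):
      for _ in range(steps):
          x, y = max(
              ((x + dx, y + dy)
               for dx in (-step_size, 0.0, step_size)
               for dy in (-step_size, 0.0, step_size)),
              key=lambda p: concentration(*p) + random.gauss(0, 1e-4))
      return x, y

  print(grow(0.0, 0.0))   # ends up close to the "target" at (10, 10)

Nothing global tells the cone where to go; following the local gradient
is enough, which is the sense in which the organization happens by
itself.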

> Perhaps after
> having reconstructed the brain it would then be possible to move neurons
> about, trial-and-error, based upon where activity is occurring and where
> it is not, but otherwise, it seems against the second law of
> thermodynamics.

The second law of thermodynamics does not forbid such reorganization in
an open system. It only constrains the total entropy: a living system
can maintain or increase its order by exporting entropy to its
surroundings (if it could not, we could not survive as ordered systems).
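
Spelled out as the standard bookkeeping (nothing original here, just
the textbook inequality):

  \Delta S_{\mathrm{total}} = \Delta S_{\mathrm{system}} + \Delta S_{\mathrm{surroundings}} \ge 0

so \Delta S_{\mathrm{system}} < 0 is perfectly permissible as long as
the surroundings take up at least as much entropy as the system sheds.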

I would also like to point out that the likely damage after cryonic
repair is not that neurons are displaced (that does not really matter)
but that some synapses might be left out, some might be misconnected
and some might be completely new. This changes the topology and
properties of the network, which can be problematic. But the network is
adaptive, and will quickly remove the obviously bad links. The real
question is how much the resulting network will differ from the
original.
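
As a crude analogue of "removing the obviously bad links" (a toy
machine-learning sketch in Python with made-up numbers, not a claim
about how the brain does it): retraining a linear "network" on its old
task with an L1 penalty prunes the spurious connections and leaves the
useful ones close to their original values.

  import numpy as np

  rng = np.random.default_rng(1)
  n_in, n_samples = 30, 500
  w_true = np.zeros(n_in)
  w_true[:5] = [1.5, -2.0, 1.0, -1.2, 0.8]   # only 5 "synapses" matter
  X = rng.normal(size=(n_samples, n_in))
  y = X @ w_true                             # what the original net computed

  # A mis-repaired weight vector: the true weights plus spurious ones.
  w = w_true + rng.normal(0, 0.5, n_in) * (rng.random(n_in) < 0.3)

  # Retrain with an L1 penalty (proximal gradient / ISTA). Weights that
  # do not help the task are driven to exactly zero, i.e. pruned.
  lr, lam = 0.05, 0.05
  for _ in range(2000):
      grad = X.T @ (X @ w - y) / n_samples
      w = w - lr * grad
      w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)   # soft-threshold

  print("surviving connections:", np.flatnonzero(np.abs(w) > 1e-6))
  print("distance from original weights:", float(np.linalg.norm(w - w_true)))

The leftover distance is the sketch's version of the real worry above:
how different the adapted network ends up from the original one.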

> What would be the reductionist explanation for identity? Or for that
> matter, what are the other ways of explaining it?

What about Minsky's idea in The Society of Mind: identity is the
mental model we make of our slowest-changing parts?

> Of course. But, I am arguing that the higher levels (more complex,
> indeed) are still biological. How does one deduce the biological
> interactions that produce an abstract identity?

That is an interesting problem. Obviously, the level of one's
explanation should be usable: we do not try to explain everyday physics
by quantum mechanics, or for that matter the nanoscale chemistry in my
body by reference to what I'm doing. My guess is that in time it will
be possible in principle to show how low-level brain processes produce
higher-level processes, and so on all the way up to the highest levels.
It might still be impractical to trace the exact path from neural
activity to identity, just as it is impractical to explain a rope and
pulley using quantum electrodynamics.

> For that matter, what
> sorts of abstract properties do you know of that have emerged from
> non-abstract low-level systems? And are not these abstract properties
> more than the sum of the non-abstract system's parts?

Yes, of course! Nobody denies that. Just look at neural networks, where
complex behaviors emerge from very simple components, or the flocking
behavior of boid simulations, or the patterns in the Game of Life.
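
As a minimal illustration (Python and numpy, nothing fancy): a glider
in the Game of Life is a coherent "object" that crawls diagonally
across the grid, even though nothing in the update rule mentions
gliders at all.

  import numpy as np

  def step(grid):
      """One Life update: birth on 3 neighbours, survival on 2 or 3."""
      neighbours = sum(
          np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
          for dy in (-1, 0, 1) for dx in (-1, 0, 1)
          if (dy, dx) != (0, 0))
      return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

  grid = np.zeros((10, 10), dtype=int)
  for y, x in [(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)]:   # classic glider
      grid[y, x] = 1

  for _ in range(8):   # every 4 steps the glider shifts one cell diagonally
      grid = step(grid)
  print(grid)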

What I'm getting at is that the emergence of higher levels doesn't
mean we need to keep the lower levels absolutely identical. Sometimes
they do have to stay the same: change the rules of the Game of Life and
the behavior is utterly different. Sometimes they hardly matter at all:
the waves and spots of activity in calcium-spiking thalamoreticular
neurons (which may be involved in dreaming) could just as well be
simulated using the same reaction-diffusion models used to model
seashells or fur textures. So the big question is whether identity is
so unstable that it will vanish if we change the brain a slight bit, or
so stable that it will remain even after large changes. I think it is
the latter, since we manage to remain "ourselves" quite well despite
constant changes in our brains, and it is very rare for people to
suddenly become other people just because of neural noise.
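
For the curious, the kind of reaction-diffusion model I have in mind
fits in a few lines of Python (a Gray-Scott system; the parameters are
simply ones that happen to produce spots, and nothing about the
specific numbers is neural):

  import numpy as np

  n = 100
  U = np.ones((n, n))
  V = np.zeros((n, n))
  U[45:55, 45:55], V[45:55, 45:55] = 0.50, 0.25   # seed a small disturbance

  def laplacian(Z):
      return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
              np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

  Du, Dv, F, k = 0.16, 0.08, 0.035, 0.065         # a "spots" regime
  for _ in range(10000):
      uvv = U * V * V
      U += Du * laplacian(U) - uvv + F * (1 - U)
      V += Dv * laplacian(V) + uvv - (F + k) * V

  print("pattern contrast:", float(V.std()))      # nonzero: spots have formed

The same handful of local rules gives spots, stripes or travelling
waves depending only on the parameters, which is exactly why the
low-level substrate can be swapped without the high-level pattern
caring.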

> Correct me here, if necessary. It would seem you are suggesting that a
> pattern exists in the neural network of the brain, which if mapped,
> would allow us to fully repair a partially-reconstructed brain,
> including the pre-existing identity. I don't know enough neuroscience to
> comment, but what if there is no pattern? What if every neuron must be
> placed where it was in the original?

In that case identity would be brittle. If a neuron in your brain
dies, you become somebody else. To some extent that is true, but the
everyday change in identity is obviously acceptable to us.

It should also be noted that the pattern does not exist *in* the
neural network of the brain; it *is* the network (or rather, the
structure, weights and properties of the network).
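
If one wanted to write that pattern down, it would be nothing more
mysterious than something like this (purely schematic, of course):

  # The "pattern" is just the wiring plus the numbers attached to it.
  network = {
      "neurons":  {"n1": {"threshold": 0.7}, "n2": {"threshold": 0.4}},
      "synapses": {("n1", "n2"): 0.9, ("n2", "n1"): -0.3},   # weights
  }

There is no further ingredient hiding behind the structure, weights and
properties; listing them *is* listing the pattern.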

> One consequence of a bottom-up definition of identity is the increased
> likelihood that we will have the ability to alter identity as we wish.
> (Ah, autopotency...) I like the prospects of that, although what happens
> when you make a change to your identity that makes you more likely to
> want to change your identity in a way that makes you more likely to
> change your identity? Hofstadter fans take note.

There might be identity attractors out there... (one obvious one is of
course 100% bliss: you will not want to change identity after reaching
it).

> Hmmm. Reanimation has a very low margin of error then based upon the
> state of biostasis technology today. Pre-freeze cellular deterioration
> may make-up even more neuron damage than that, don't you think? In the
> absence of a pattern to work from in reconstruction, I don't give
> today's cryonics customers much of a chance at being themselves.

Well, judging from Mike Darwin's photos, I would say there is plenty
of pattern to work with. The brain certainly won't start up by itself
if thawed, but I think even the broken cells retain enough information
for a sufficiently clever repair device to restore practically all of
them. The synapses (which IMHO are the most important parts) looked
quite fine to me.

> Your above point does make good sense in that evolution is likely to
> have installed some built-in redundancy within the network to avoid
> total incapacitation after brain trauma. What are your thoughts on the
> reasons for the stability of neural networks? Is there actually a
> reorganization to "carry the load," so-to-speak, like you mentioned
> above? Or can mere redundancy explain the bulk of it?

As most people know, the brain can handle damage surprisingly well.
Neural networks are massively redundant, and since information is
stored across the whole network instead of in individual cells, it is
not badly degraded when parts are broken. In the brain, other regions
also take over damaged functions. I have a friend who managed to
overcome some rather severe cerebral palsy through training and sheer
stubbornness, a magnificent example of how the brain can rewire itself.
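
A classic toy demonstration of the "stored in the whole network" point
(a tiny Hopfield-style associative memory in Python; the sizes are
arbitrary): store a few patterns, delete a large fraction of the
synapses, and the patterns can still be recalled from noisy cues.

  import numpy as np

  rng = np.random.default_rng(42)
  n, n_patterns = 200, 5
  patterns = rng.choice([-1, 1], size=(n_patterns, n))

  # Hebbian storage: every pattern is smeared across the whole matrix.
  W = sum(np.outer(p, p) for p in patterns) / n
  np.fill_diagonal(W, 0)

  def recall(W, probe, steps=10):
      s = probe.astype(float)
      for _ in range(steps):
          s = np.sign(W @ s)
          s[s == 0] = 1
      return s

  damaged = np.where(rng.random(W.shape) < 0.4, 0.0, W)        # kill 40% of synapses
  probe = patterns[0] * np.where(rng.random(n) < 0.1, -1, 1)   # noisy cue
  recalled = recall(damaged, probe)
  print("overlap with stored pattern:", float((recalled == patterns[0]).mean()))

No single weight "contains" any of the patterns, so losing a large
minority of them degrades the memory only gracefully, much like the
redundancy argument above.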

-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y

