From: Anders Sandberg (asa@nada.kth.se)
Date: Sun Sep 21 1997 - 12:29:54 MDT
"Prof. Jose Gomes Filho" <gomes@dpx.cnen.gov.br> writes:
> At 20:03 11/09/97 -0700, Geoff Smith <geoffs@unixg.ubc.ca> wrote:
>
> >.....................
> >I would say there is a self, but it is not a state. It is a history of
> >states and changes of state, with the potential for even more
> >history-making.
> >....................
>
> Using the available language's auxiliary of all, named "mathematics", we
> could just say: self(t), t in ( -oo , +oo ) <considering, for simplicity,
> t real and not complex...>.
Hmm, as a mathematician (well, I started out as a mathematician before
falling in love with brains and computers) I think this is too simplistic.
The above definition simply assumes that there is something known as
the self, existing at all times, which may change. But it doesn't tell
us much, and it is not tied to anything in the physical world.
My suggestion is something like this: self() is a function, computed
by any system capable of doing so, that acts on that system's current
state and produces something called a 'sense of identity' (SoI):
self: state -> SoI
Note that as the state changes, so does the sense of identity. Hmm,
I see I have not been general enough here: the self-function need
not be universal; it is unique to each system (I identify myself
with my actions, you might identify with your memes, and somebody
else might identify with their body). So if we assume the existence
of some kind of abstract "superself-function", which for any system
in a certain state gives us its sense of identity, we get:
self: state x state -> SoI
This means that self(X,Y) is system X's estimate of system Y's
sense of self. In practice X can of course never calculate this,
only self(X,X), since it only has access to its own state, but
it can make an estimate of self(Y,Y), which need not be anywhere
near the true self(Y,Y). Two such estimates would be
self(X,Y'), an estimate of how one would feel if one were like Y,
and self(Y',Y'), an estimate of how someone like Y would feel
about itself.
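To make the two-argument form concrete, here is a minimal Python
sketch; representing states and SoI values as feature dictionaries
with per-system weights is purely my illustrative assumption, since
the definitions above leave both abstract:

    # Toy model: each system carries its own identity weights, so
    # self_fn(X, Y) is "X's estimate of Y's sense of identity",
    # filtered through what X happens to identify with.
    def self_fn(evaluator_state, target_state):
        weights = evaluator_state["identity_weights"]
        return {k: weights.get(k, 0.0) * v
                for k, v in target_state["features"].items()}

    # X identifies with actions, Y with memes (as above).
    X = {"features": {"actions": 1.0, "memes": 0.2, "body": 0.5},
         "identity_weights": {"actions": 1.0}}
    Y = {"features": {"actions": 0.3, "memes": 1.0, "body": 0.4},
         "identity_weights": {"memes": 1.0}}

    soi_xx = self_fn(X, X)  # the only value X can actually compute
    soi_xy = self_fn(X, Y)  # X's estimate of Y's SoI -- may be way off
    soi_yy = self_fn(Y, Y)  # Y's own SoI, inaccessible to X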
Note that self(X,X) is history-dependent if the system has a
memory of its past. This information is included in its state X.
Also note that most people seem to assume self(X,X) never changes.
I would say this is because 1) self(X,X) changes rather slowly
over time, and 2) it makes a lot of sense to make self(X,X)
one's mental origin ('me') when comparing oneself with other
and potential selves.
Now, let's apply this to some transhumanistic problems. Let X(t)
be my state over time. self(X(t),X(t)) would be my sense of
identity. Through experience I know that I tend to identify with
my past selves at least a certain time T back, so we get (assuming
some kind of distance metric in the "sense of identity space"):
| self(X(t),X(t)) - self(X(t-d),X(t-d)) | < epsilon, 0 < d < T
where epsilon is a constant and d is how far in the past I look.
In fact, I would say that normally the distance is less than
epsilon*d, suggesting that self(X(t),X(t)) is continuous.
Notation: I will henceforth write dist(t,s) for
| self(X(t),X(t)) - self(X(t),X(s)) |, the distance between
me at time t and me at time s, as evaluated at time t.
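As a toy numerical reading of dist and the epsilon bound - assuming,
purely for illustration, that SoI values are vectors compared with a
Euclidean metric:

    import math

    def dist_soi(soi_a, soi_b):
        # Euclidean distance in the (assumed) sense-of-identity space.
        keys = set(soi_a) | set(soi_b)
        return math.sqrt(sum((soi_a.get(k, 0.0) - soi_b.get(k, 0.0)) ** 2
                             for k in keys))

    # A slowly drifting self: it moves at most `rate` per unit time,
    # so dist(t, t-d) <= rate*d -- the Lipschitz-style epsilon*d bound.
    def soi(t, rate=0.01):
        return {"actions": 1.0 + rate * t, "memes": 0.5}

    t, d, epsilon = 100.0, 5.0, 0.1
    assert dist_soi(soi(t), soi(t - d)) < epsilon  # 0.05 < 0.1: still "me"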
I notice that I can only evaluate self(X,Y) when I'm conscious.
When I'm not conscious I will not do this, but the distance between
my conscious self and my nonconscious states still seems to be less
than epsilon. So I count my past sleeping selves as myself.
What about the future? My state X(t) is evolving, and it is
quite possible for dist(t,t+d) to exceed any bound if I'm really
lucky/unlucky (depending on your view). That means I can become
someone more different from my current self than I am from a
stranger. This frightens many people. However, since X(t) is more
or less continuous and self(X(t),X(t)) seems to be continuous and
fairly resilient to noticeable changes in my body, mind and
environment, it seems likely that, barring any surprises, I will
remain myself (as estimated by me today) at least for some time.
If our states are evolving in a chaotic manner, which seems likely,
then dist(t,t+d) ~ exp(lambda*d), where d is the time into the future
and lambda > 0 is our "identity Lyapunov constant" (which may not be a
constant either, but let's keep things simple for now).
Since our far past seems to become "not me", the above formula
does not hold for d < -T, and we get a suggestion that X(t) is
chaotic not only in the positive time direction but also in the
negative one - i.e. we have a whole spectrum of Lyapunov constants
of all signs.
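A back-of-the-envelope consequence (numbers invented, the exponential
form taken at face value): given an observed identity drift after d
years, the implied lambda is ln(dist)/d, and the time to reach any
horizon distance D_max follows directly:

    import math

    # dist(t, t+d) ~ exp(lambda*d)  =>  lambda ~ ln(dist)/d
    d, observed_dist = 10.0, 2.0            # invented: drift 2.0 in 10 yr
    lam = math.log(observed_dist) / d       # ~0.069 per year

    D_max = 20.0                            # invented horizon distance
    horizon_years = math.log(D_max) / lam   # ~43 years at this rate
    print(f"lambda ~ {lam:.3f}/yr, horizon in ~{horizon_years:.0f} yr")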
Alexander Chislenko suggested (in his excellent paper "Drifting
Identities", http://www.lucifer.com/~sasha/articles/Identity.txt,
from which I have borrowed many ideas) that we have identity
horizons, beyond which we would not recognize versions of ourselves
as ourselves. However, these horizons need not be real: just as
somebody falling into a black hole never sees an event horizon, they
might recede as we move closer to them. Others may remain quite
fixed - I do not consider a cloud of ionized plasma to be me, and I
doubt I would even if I were standing next to an armed nuclear
weapon. So "death" can be seen as moving across a horizon.
There are several kinds of death. The usual kind consists of having
our state X(t) move towards a non-living, highly entropic attractor
and lose cohesion. Most people seem to think this part of state
space is delineated by discontinuities in state, but as anybody who has
actually seen another person die slowly knows, it can be a very
gradual process with no clear discontinuities.
In fact, this may explain why some dying people accept their death:
the horizons recede as they die, and they no longer consider their
inevitable death as a loss of identity. Compare this to the behavior
of Timothy Leary.
Another kind of death is "death forward": we change so much that we
are no longer recognizable to ourselves, and become new persons. Note
that this already happens all the time: I doubt my 5-year-old self
would have recognized me as I am now as itself: our appearance,
values and ways of thinking are simply too different. Writing D_max
for the identity-horizon distance, dist(5 years, 25 years) > D_max.
And the same goes in the other direction: I have a hard time seeing
the little human who thought frozen puddles were a conspiracy, and
who jumped from a pier into deep water just to see what would happen,
as very similar to myself, so dist(25 years, 5 years) > D_max. Since
different people X evaluate self(X,Y) differently, some might regard
all their previous states (including some quite non-human states,
such as a blastula) as themselves, while others regard only the
latest as themselves. Both are right, since they apply different
evaluating functions self(X, ) to their pasts.
However, in the future we might change even more dramatically,
by becoming immortal transhumans, posthuman jupiter brains or
open standards. I would guess that it is very likely that many
of the horizons will recede quite quickly as we approach them.
Some might remain, and that suggests that there can be jumps in
identity.
One such example is destructive uploading: our minds are digitized
in a destructive manner and a new entity, the uploaded version, is
created. So, will dist(human, upload) be too large for us to regard
the upload as ourselves? That seems to depend a lot on how we
evaluate it; some people identify with their physical body and might
hence regard the difference as immense, while others, who identify
with their mind, would regard it as smaller. There doesn't seem to
be any reason to think the difference cannot be well within the
identity horizon for some people.
If uploading is to be regarded as successful, the upload should
consider itself to be the previous person: dist(upload, human)
should be small enough. "Small enough" is commonly suggested to
be roughly equal to the ordinary changes in identity during one's
life; as we have seen, the definition of "one's life" may be a
bit tricky since our remote pasts may actually be too alien.
Perhaps a better definition should be that the maximal allowable
change in identity should be on the order of the identity
changes during our self-perceived past:
dist(upload, human(t)) < O(dist(human(t), human(t-d))) for all d
such that dist(human(t), human(t-d)) < D_max(t).
Note that this can be far less than D_max(t), since most of the
past may have been rather unchanging, with the exception of becoming
the person in the first place.
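In code the proposed criterion might look like this sketch (again
with an assumed numeric dist and invented values):

    # The upload jump should be no larger than the identity drift
    # across the self-perceived past: past states still within D_max.
    def upload_acceptable(dist_upload, past_dists, d_max):
        self_perceived = [d for d in past_dists if d < d_max]
        return dist_upload <= max(self_perceived)

    past_dists = [0.1, 0.4, 1.2, 3.5, 9.0]  # drift vs. ever-older selves
    D_max = 5.0                             # the 9.0 self is already alien
    print(upload_acceptable(2.0, past_dists, D_max))  # True: within drift
    print(upload_acceptable(4.0, past_dists, D_max))  # False: larger jump

Note that the threshold here (3.5) is indeed far below D_max (5.0),
as the paragraph above suggests.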
Since the upload will have roughly the same mental structure and
hence evaluating capabilities, self(upload, human) ~ self(human, upload),
at least right after the uploading. After a while the distance
will likely grow.
Non-destructive uploading poses another problem: suppose a person X
is copied into an upload Y. Are they the same person? The main problem
here is that people tend to get confused by semantics: there is a
difference between being an independent *being* with an individual
consciousness (I don't experience what anybody else experiences, and
neither do they experience what I experience), being an *individual*
with a sense of selfhood (i.e. self(X,X) exists), and being a *person*,
which is a legal term rather than a philosophical concept.
A conscious system is a being (let's ignore borganisms for the
moment), and likely also an individual and, if it is lucky, a person.
Now, let's look at X and Y. Both are beings (assuming uploads have
consciousness), but neither will experience the experiences of the
other (footnote: a simple way of proving this is to run Y on a deterministic
computer with a deterministic environment (non-determinism can at
least briefly be emulated by a look-up table with random numbers):
since Y would by definition experience the same things each time the
"Y program" was run, it cannot experience anything X is experiencing),
so X and Y will be different beings. However, both X and Y will
evaluate their selves self(X,X) and self(Y,Y) to almost the same
sense of identity, so they will be the same individual. Legally,
they might be anything and change personhood by changing jurisdiction.
So, it seems that if a person is copied (xoxed, forked or something
similar) we will end up with a number of different beings, but the
same individual. These beings will of course diverge at a rate
determined by their Lyapunov constants, and in the long run become
different individuals.
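A toy simulation of such divergence, using the logistic map as a
stand-in for any state dynamics with a positive Lyapunov exponent
(the map and the horizon value are, of course, my own invention):

    # Two copies of one individual, differing by a tiny perturbation,
    # evolved under the chaotic logistic map (r = 4, Lyapunov ~ ln 2).
    x, y = 0.4, 0.4 + 1e-12
    for t in range(100):
        if abs(x - y) > 0.5:  # crude "different individuals" threshold
            print(f"distinct individuals after {t} steps")
            break
        x, y = 4 * x * (1 - x), 4 * y * (1 - y)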
Finally, what about merging (as described in Greg Egan's short story
"Closer" http://www.midnight.com.au/eidolon/issue_09/09_closr.htm)?
In this case two beings X and Y merge to form Z, a composite being
with parts from both and possibly new emergent properties. It is
not obvious how large dist(X,Z), dist(Y,Z), dist(Z,X) and dist(Z,Y)
would become. A wild guess is that since Z would contain at least
some of the identity of X, dist(X,Z) would be on the order of
dist(X,Y)/2; this is likely more than most people would accept as
themselves, so it seems likely Z is regarded as a new individual
by X and Y. Z, on the other hand, can trace its past through the
life of X and Y, and might after a while with its new valuations
regard itself as a continuation of both X and Y with a preserved
identity.
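One way to motivate the factor 1/2 (my gloss, not anything from the
story): if the SoI space were Euclidean and Z sat exactly halfway
between X and Y, Z = (X+Y)/2, then

    dist(X,Z) = | X - (X+Y)/2 | = | X-Y |/2 = dist(X,Y)/2

exactly; any emergent properties of Z would push the distance above
this baseline.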
--
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y