Re: Can I kill a Copy? (long)

From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Mon May 08 2000 - 09:55:17 MDT


On Mon, 8 May 2000, Harvey Newstrom wrote:

> I am trying to explain why killing this current version of Harvey Newstrom
> is not an acceptable way to give me immortality. Believe it or not, other
> people are arguing that it is.

This gets discussed a lot in "Flight of the Cuckoo" (that I've mentioned
before). I don't see an easy way out of it. In situations where an
"indentical" copy results from a destructive readout process, you really
are standing in a swamp if you object to it (which seems to be Harvey's
position [though I haven't read the entire thread]). Particularly if
the *only* way to make you immortal is to undergo a transformation
that is an integral part of the desctructive readout. As I pointed out
at Extro3, your chances of attaining "immortality" increase with the
size you distribute yourself over. As we are currently structured, we
cannot distribute ourselves at all. Distributed intelligence is required
to avoid local "accidents". (Interestingly, distributed intelligence is
also "slow" intelligence, since signals can cross your physical extent no
faster than light, so the more "immortal" you are, the slower your "clock" rate.)
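
To put rough numbers on that "slow clock" point, here is a quick
back-of-the-envelope Python sketch. The radii and the assumption that one
coherent "thought cycle" needs at least one light-speed round trip across
the whole structure are purely illustrative, not taken from anything cited
above:

# Rough ceiling on the globally coherent "clock rate" of a spatially
# distributed mind, assuming one coherent cycle needs at least one
# light-speed round trip across the structure. Illustrative numbers only.

C = 299_792_458.0  # speed of light, m/s

SCALES_M = {
    "single brain (~0.1 m)": 0.1,
    "planet-sized (~6.4e6 m)": 6.4e6,
    "solar-system-sized (~1 AU)": 1.496e11,
    "interstellar (~1 light-year)": 9.461e15,
}

for name, radius in SCALES_M.items():
    round_trip_s = 2 * radius / C      # there-and-back signal delay
    max_hz = 1.0 / round_trip_s        # upper bound on coherent cycles/s
    print(f"{name:32s} round trip {round_trip_s:.3e} s"
          f" -> at most {max_hz:.3e} coherent cycles/s")

The bigger you spread yourself, the harder you are to kill with one local
accident, but the fewer globally synchronized "thoughts" per second you
can have.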

Moravec's article pointed out, I think, the need to avoid thinking of it
in terms of "me" or "my copy". More acceptable is to think of it as
"a pattern" and "copies of a pattern". Then you have the issue of
whether the patterns are "run", diverge, and develop individual "rights".
Whether the destruction of one pattern to create an identical pattern is
an "acceptable way" presumably depends on personal taste.

Now, the way of thinking about it that works for me is hoping that the
nanobots can sit on my neurons for many years, distilling information and
sending messages between my brain and my "mind" in my distributed computer
network. As the computers become increasingly powerful, more and more of
my mind is in the machine rather than in my brain. Eventually my brain
has a fatal accident, dies of natural causes, gets put on ice, or is
donated to medicine by my computer side for teaching 1st-year med students.
What I then do with my "mind in the machine" is another thread.
Certainly I can make inactive backup copies and distribute them.
Or I can distribute my mind over a large number of processors and
hardware architectures for safety.
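
And as a toy illustration of why multiple independent backups help: if
each stored copy has some probability p of being destroyed over a given
period, and the failures are independent (a big assumption), then losing
*all* N copies has probability p^N, which shrinks very quickly with N.
A minimal Python sketch with made-up numbers:

# Toy reliability estimate for N independently stored backup copies.
# p is an assumed per-copy probability of destruction over some period;
# losing "yourself" entirely requires losing every copy, i.e. p**N.
# Independence of the failures is the big assumption here.

p = 0.10  # illustrative per-copy loss probability

for n in (1, 2, 5, 10):
    p_all_lost = p ** n
    print(f"{n:2d} copies: P(all lost) = {p_all_lost:.1e}, "
          f"P(at least one survives) = {1 - p_all_lost:.10f}")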

Now of course, if you "play it safe" and distribute yourself as much
as possible, that likely puts you at an evolutionary disadvantage
relative to people who choose to remain small and fast. So if a
competitive economic environment exists, rather than, say, a "moral"
environment where resources are distributed by agreement, immortality
falls victim to ruthless competition. Only organisms at the "top" of
their ecological niches (or, in rare cases, those with extraordinary
defenses) get to develop the systems that can maintain themselves
over increasingly long time periods.

So the only paths I can see where immortality is feasible are:
(a) Grab all the resources you can as fast as you can;
(b) Develop a collective moral system where the society allocates
    you resources and values your preservation; or
(c) Take a very fast ship someplace where there are unallocated
    resources and you are far enough away from people that you have
    time to build yourself into an SI before anyone else colonizes
    your resource base.

I'm partial to (b), but I don't see an easy way to implement it
since you have to trust that nobody is pursuing (a) and humans
are not inherently "trustable".

Robert


