From: GBurch1@aol.com
Date: Sun Oct 03 1999 - 11:12:24 MDT
- - - [ This is the first part of a VERY long post ] - - -
In a message dated 99-10-02 17:04:04 EDT, bradbury@www.aeiveos.com (Robert J.
Bradbury) wrote:
> So, after a month, I'm going to trip over the ropes back into
> the ring with ObiWan Burch regarding mind ethics.
I'm glad you did. As you said about my original post: This made me think
hard.
> > On Sun, 29 Aug 1999 GBurch1@aol.com wrote:
>
> > In response to my message dated 99-08-27 07:21:42 EDT
>
> > A MORAL theory of mind (which seems to be what you're looking for) may be
> > dimly perceived in this insight, as applied to the questions you've
> > presented. As a first pass at formulating such a moral theory of mind,
> > perhaps we can say that an entity should be treated as both a moral
> > subject and a moral object TO THE EXTENT THAT it exhibits more or fewer of
> > the various distinct elements of "mind". As an example, consider a book or
> > computer hard drive as an instance of memory. Both are utterly passive
> > repositories of information, incapable by themselves of processing any of
> > the data they record.
Note that I am at this point proposing "mind" as a moral axiom. I think that
this is the most basic and least arbitrary possible foundation for a moral
and ethical system, an idea I've been exploring for a couple of decades. But
I think that some of what your post addresses is whether "mind" is a "good"
moral axiom, which of course is a redundant question: An axiom is just that;
a "given", a foundation below which analysis isn't "allowed" in the proposed
system.
Of course, it's perfectly legitimate to question the robustness of a moral
axiom, or whether logical theorems derived from the axiom yield absurd
results, which some of your questions do. But the view I'm exploring WOULD
entail some shifts in moral perspective from those commonly held by most
people today. Thus, what we might find to be an "absurd" result may well
change as we adopt new moral axioms.
This has certainly happened before in human history. Consider, for instance,
acceptance of individual autonomy as a fundamental moral "good" (if not an
axiom, per se) and perceptions of the institution of slavery or the legal
subjugation of women. Both of those things were vigorously defended by at
least some people who considered themselves to be moral. As a matter of
historical fact (as opposed to rational contemplation and logic), moral
systems evolve as different values come to be held as more or less
fundamental and social practice is thereafter seen as inconsistent with the
newly valued basic principle. My proposal to make "mind" the most fundamental
moral axiom would involve such an adjustment, which we see in many of your
examples.
> > Thus, a book or a current computer storage device exhibits
> > only one aspect of mind (and that only dimly).
>
> Maybe, but the complexity is increasing rapidly. The next step would be
> anticipating what blocks you wanted based on historical trends and
> retrieving them before you request them. This is an attribute of a
> very efficient secretary.
Yes, and I propose that as the performance of the system begins to approach
that of a good secretary, you may well have to begin to treat that system
with as much respect as one would a simple servant. I'm not saying that a
simple system that anticipated your data needs based on past patterns would
be entitled to "wage and hour" equity, but that one would begin to view such
systems in moral terms, just as one does, say, an anthill. We certainly don't
accord an anthill the status of moral subject, but we ought to condemn its
completely arbitrary destruction.
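Just to make the "efficient secretary" idea concrete, here is a toy sketch
(mine, purely illustrative, not anything Robert actually proposed) of the kind
of history-based prefetching he describes: a program that watches which blocks
tend to follow which and fetches the likely next one before it is requested.
The class and method names are hypothetical.

    # Toy sketch of history-based block prefetching (illustrative only).
    from collections import Counter, defaultdict

    class BlockPrefetcher:
        """Remember which block tends to follow which, and suggest the
        most frequent successor of the block just requested."""

        def __init__(self):
            # block id -> counts of the blocks that have followed it
            self.successors = defaultdict(Counter)
            self.last_block = None

        def record_request(self, block_id):
            """Update the access history with an observed request."""
            if self.last_block is not None:
                self.successors[self.last_block][block_id] += 1
            self.last_block = block_id

        def predict_next(self):
            """Return the block most likely to come next, or None."""
            if self.last_block is None or not self.successors[self.last_block]:
                return None
            return self.successors[self.last_block].most_common(1)[0][0]

    # Usage: feed it the access stream, then ask what to fetch ahead of time.
    p = BlockPrefetcher()
    for b in [1, 2, 3, 1, 2, 3, 1, 2]:
        p.record_request(b)
    print(p.predict_next())  # prints 3, the block that has usually followed 2

The frequency-counting scheme is only one assumption; any predictive model
would serve equally well for the moral point being made here.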
> > In the proposed moral theory of mind, we do not consider these examples
> > to be very significant moral objects or subjects; although, interestingly,
> > some people DO consider them to be very slight moral objects, in the sense
> > that there is a slight moral repugnance to the notion of burning books (no
> > matter who owns them) or, as has been discussed here recently, "wasting
> > CPU cycles".
>
> In an abstract sense it seems that the degree of "wrongness" of "burning"
> or "wasting" has to do with the destruction of organized information
> or using CPU cycles for non-useful purposes. I.e. contributing to
> "entropy" is the basis for the offence. This seems to apply in the
> abortion debate as well. I can destroy a relatively unorganized
> potential human (< 3 months) but should not destroy a much more
> organized human (> 6 months). This seems to be related to the
> destruction of information content. It also holds up if you
> look at when physicians are allowed to disconnect life-support
> devices -- when it has been determined that the information
> has "left" the system. Extropians may extend it further by
> recognizing that incinerating or allowing a body to be consumed
> is less morally acceptable than freezing it since we really
> don't have the technology to know for sure whether the information
> is really gone.
Yes, all of this follows, but the key in the idea I propose is the extent to
which the system in question is an instance of "mind" or potential mind. I
realize this begs the question of what a "mind" is, but that is the point of
my proposal: As we approach the threshold of what we may begin to call
artificial intelligence or "synthetic minds", we MUST begin to clarify what
we mean by "mind". Just as advanced religious moral systems caused people to
begin to explore what defined a "person" because they were "god's creatures"
or because they had the potential of "salvation" or "enlightenment", a moral
system based on "mind" lends a moral imperative to otherwise abstract or
scientific-technical questions of what a "mind" is.
> > From the proposed axiom of "mind morality", one could derive specific
> > propositions of moral imperative. For instance, it would be morally wrong
> > to reduce mental capacity in any instance, and the EXTENT of the wrong
> > would be measured by the capacity of the mental system that is the object
> > of the proposition.
>
> Ah, but you have to be careful if you require an external force to
> implement the morality. If I personally wanted to have the left
> half of my brain cut out and subject myself to a round of hormone
> treatments that caused extensive regrowth of stem cells, effectively
> giving me a baby's left-brain to go with my old right-brain, you could
> argue that I would be doing something morally wrong by destroying
> the information in my left-brain (actually I would probably have
> it frozen so your argument might not have much weight). However, if
> you go further and act to prevent me from having my operation, I
> would argue that you are behaving wrongly. After all, it's my body, damn it!
> We seem to recognize this principle at least to some degree with the
> concept of the "right to die". This isn't generally recognized in
> our society but I suspect that most extropians (esp. the stronger
> libertarians) would argue this.
This is a good point, but one that moves, in my own philosophical vocabulary,
from questions of "morals" to ones of "ethics" and thence to "law", where
each step leads outward from the moral subject to society. As I use these
terms, each step involves APPLYING the one before and, thus, involves greater
complexity and a greater need to balance values. In particular, the value of
AUTONOMY, as a moral good, an ethical value and a legal principle, plays an
increasingly important role as a mediator of application of the moral value
of mind.
Note that I do not propose "mind" as the sole moral axiom, although I am
exploring the implications of holding it as the most fundamental one. It may
well be that "autonomy" as a moral, ethical and legal value can be completely
derived from "mind" as a moral axiom; I'm not ready to make such a
declaration. But one cannot, I am sure, derive a "complete" ethical and
legal system from valuing "mind" alone or in some sort of "pure" or unapplied
form.
> > Thus, willfully burning a book would be bad, but not very bad, especially
> > if there is a copy of the book that is not destroyed. It might be more
> > wrong to kill an ant (depending on the contents of the book with which one
> > was comparing it), but not as wrong as killing a cat or a bat.
>
> Yep, the more information you are "destroying", the wronger it is.
> But we (non-Buddhists) generally don't consider it over the line to step
> on an ant and we certainly don't get upset over the ants eating plant
> leaves whose cells are self-replicating information stores, but if
> you destroy another person, or worse yet, lots of them, you have
> crossed over the line.
Here I believe you make a simplistic application of the axiom, when you use
the term "over the line". Your expression assumes that the world can be
divided into two zones: "good" and "bad". That is the methodology of moral
and ideological absolutism. Every application of morality to the world (i.e.
every ethical and legal problem or proposition) involves a balancing act.
Ultimately, we can only say that things are ON BALANCE "good" or "bad".
You are also interpreting my proposed axiom as if "mind = information". That
certainly can't be right since, as I indicated in my first illustrative
example, things like books and computer storage devices are instances of only
one or a very few aspects of "mind". This compounds the absolutist error.
> > > Q1: Do you "own" the backup copies?
> > > (After all, you paid for the process (or did it yourself) and
> > > it is presumably on hardware that is your property.)
> >
> > I don't think you "own" your backup copies any more than you "own" your
> > children or other dependants.
>
> But I do "own" my left brain and my right brain, my left hand and
> right hand and all of the cells in my body. Now, in theory I can
> engineer some cells to grow me a hump on my back of unpatterned
> neurons and then send in some nanobots to read my existing synaptic
> connections and rewire the synapses of my hump neurons to be virtually
> identical. The nanobots also route a copy of my sensory inputs
> to my "Brhump" but make the motor outputs no-ops (after all its
> a backup copy). Are you suggesting that I don't "own" my hump?
I've got to hand it to you, Robert, here you seem to have posed about as hard
a problem as my idea could face. First, I think it's necessary to be more
clear than I previously was about just what we mean when we talk about
"ownership" in connection with minds, whether our own or not. While I do
endorse Max's notion of "self-ownership", I think it's necessary to
distinguish the nature of that ownership from the relationship we have with
matter that is not a substrate for mind. The notion of "self-ownership" is a
reflexive mapping of the notion of property and serves a useful purpose in
clarifying the ethical and legal outlines of appropriate boundaries on
individual autonomy in society. The logical conundrum posed by your example
relates to another "folding" of that reflexivity.
At one level, the resolution of your problem might be simple: "If I rightly
can be said to 'own my self', and my self is my brain, then I own all parts
of my brain, even ones external to my cranium." Is your "Brhump" part of
your brain? Or is it another brain? More importantly, is it another "self"?
If so, then the principle of self-ownership won't help you: The Brhump "owns
itself" and has a right to autonomy.
You say that the pattern of neurons in your "Brhump" is "virtually
identical" to the ones in your cranium and the sensory inputs appear to BE
identical as you've posed the hypothetical problem. If the pattern of
neurons were PRECISELY identical, then it appears you've succeeded in making
an exact copy of your mind that is updated in real time to maintain that
identity. If so, then it seems you can escape any moral issue: Your "Brhump"
has no more moral status than your reflection in a mirror.
But once any significant divergence of the two instances of mind develops, you
do have a moral issue, at least in my opinion. Look at it this way: If your
brain were removed from your body and somehow sewn onto someone else's back,
how would you want it to be treated? What if it was your identical twin's
back? We treat Siamese twins as two different (but closely related)
individuals for moral purposes, and for a good reason: because they are two
distinguishable moral SUBJECTS.
> > In the case of your children, they are built from the "joint intellectual
> > property" of you and your mate and all of the atoms making up their bodies
> > started off as your property, but still we don't say you "own" your
> > children. Why? Because they are complex minds.
>
> So is my "Brhump". But unlike the example of children, I created
> it entirely and it is a parasite on my body.
I don't think this gets you to the point where you are excused from any moral
obligation to this new entity you've created: If it is a distinct mind, you
owe it the duties appropriate to 1) its current level of mental activity and
2) its potential levels of mental activity.
> > Now, you may have special rights and duties with regard to minds that
> > have such a special relationship to you, but "ownership" isn't among them.
>
> Well, since it is a copy, in theory cutting it off and burning it falls
> into the "book burning" moral category. Since the brain can't feel pain
> there are no "ugly" moral claims that this would be the equivalent of
> torturing or murdering someone.
If it is a precise copy that has never diverged in its "trajectory of
identity" from the original, then you're right. In fact, it has no more
moral quality than smashing a mirror. But to the extent that it has
developed a mental identity distinct from your own, how can you distinguish
your "disposal" of this mind to what some other mind might do to you?
> > Morally, mind is a special sort of "thing". For one thing, it is a
> > process. Thus, one might be said to have something more akin to
> > "ownership" in the stored pattern of one's backup copies, but once they
> > are "run" or "running", they would take on more of the quality of moral
> > subjects as well as moral objects. Once a system is capable of being a
> > moral subject, "ownership" ceases to be the right way to consider it as a
> > moral object.
>
> Clearly in the example above, the mind is running (after all, what good
> is a backup copy if it isn't up-to-date?). Now, as an interesting aside,
> there is the question of whether "untouched" "Brhumps" (with exactly
> the same inputs) will diverge from your brain and need to be edited
> back into complete equivalence by the nanobots. Whether or not
> that is necessary, the subject was created by me, for my use and
> literally "is" me (even though it is a second instantiation).
You seem to acknowledge here the moral distinction between an exact copy and
one that has diverged from you. I think this is because unless we give a
special, fundamental moral status to independent minds, we can make no moral
judgments at all, because we ARE minds.
> Now, you might say that if I give my 2nd instantiation separate
> senses and let it evolve a bit, it then becomes its own unique
> moral subject. But what if it knew going into the game (since it's
> a copy of me) that I have a policy of incinerating "Brhumps" at
> the end of every month? [Not dissimilar from the knowledge in
> our current world that we don't live to 150 -- it's just the
> way it is.]
>
> This comes up a little in Permutation City and to a much greater
> degree in the Saga of The Cuckoo (where people go climbing into
> tachyon teleportation chambers, knowing that the duplicate on
> the receiving end faces almost certain death). It's kind of a
> going off to war mentality, knowing that you might not come back,
> but you do it anyway if you feel the need is great enough.
Well, this is a different question from "What can I do to minds I create?"
If someone freely chooses to sacrifice themselves for the good of another . .
. well, that is their choice and no one is harmed but the one doing the
choosing. It's hard to see how one could condemn such an action. I happen
to be deeply dubious about positions that some people take in discussions
like this that assume that a copy of me will be willing to sacrifice itself
for me. Some years ago we went down this road at great length in discussion
of "the copy problem". If I step out of a "copy machine" and encounter the
original of me asking me to stick a hot poker in my eye so that "I" can find
out how I'd react to such a situation, I doubt seriously whether I'd just say
"Oh, OK!" (Use Rick Moranis' voice from "Ghost Busters" there.)
Whether I might be willing to engage in more dangerous behavior than I
otherwise would be if I knew that a very recent copy could be reanimated
should I be destroyed in the process somehow seems PSYCHOLOGICALLY like a
different question. The distinction is whether the "I" making the choice
stands to gain some benefit. I doubt that I would engage in atmospheric
re-entry surfboarding if I didn't have a recent copy somewhere. If I did
have such a copy, I might just do it, because it would be so COOL.
* * * END OF PART ONE * * *