> > Where does this conclusion come from? Simple: The Golden Rule. Minds
> > are a special class of moral object BECAUSE they are also moral
> > subjects. In other words, we have to treat minds differently from
> > other moral objects because they are like "us", i.e. there is a
> > logical reflexivity in contemplating a mind as a moral object.
>
> But military commanders will [unhappily] order their troops into situations
> where they know some of those moral subjects *will* be killed. If I want
> to use my "Brhump" in an experiment in which I know (and it knows) that
> it's eventually the cutting room floor for it, it seems justified if
> I can come up with a good reason for it. (I'm facing a really difficult
> problem that simply *has* to be solved; 1 brain isn't enough for it;
> create 2 brains, have them work on separate paths; select the brain
> that comes up with the best solution and kill the other brain.)
> That's what nature does and most of us don't see nature as
> being "morally" wrong.
First, the ethics of war raise very acute moral issues. It is true that people sometimes choose to sacrifice themselves for deeply-held values. They subject themselves to a command structure in which one person is given the authority to make such decisions because of the moral extremity of warlike situations (or out of simple stupidity). But such matters are not the same as destroying an independent mind without its consent.
In your example of copying your brain to work on multiple tracks toward a solution to some problem, why do you KILL the brains that fail to come up with the right answer? That seems like, well, overkill.
> > > Q2: Do you have a right to "edit" the backup copies?
> [snip]
>
> > On the other hand, it is not only acceptable, but good to do our best
> > to act to "program" or "edit" the child's mind to make it a BETTER mind.
> > Thus, the proposed morality of mind would find that some "editing" of
> > one's own backup copies would be good, and some bad.
>
> It seems like editing (or even selecting brains), if done from the
> perspective of self-improvement would be morally acceptable if the
> information gained is more than the information lost. Destroying
> a brain that is mostly a copy that fails the "best solution" test
> probably isn't much of a loss.
Your use of the term "self-improvement" begs the question of what a "self" is. How can you say that complete, autonomous copies of your brain are not "selves"?
> > > Q3: If you "edit" the backup copies when they are "inactive"
> > > (so they feel no pain) and activate them are they new individuals
> > > with their own free will or are they your "property" (i.e. slaves)?
> >
> > Without a doubt, they cannot be your slaves, whether you edit them or not.
>
> > See the response to Q1 above.
>
> But if I went into it *knowing* that the over-riding purpose of the
> exercise was to continually and rapidly evolve a better mind, I've
> implicitly signed the waiver (with myself) that part of me is
> destined for dog food.
Again, this is the issue of moral autonomy and consent. If you consent to such a thing (especially if consent is given both before and after the copying is done), I can't question the moral right to determine the circumstances of one's own destruction.
> > > Q4: & Greg's Extropian Ethics and the Extrosattva essay URL [snip]
> >
> > ... which is basically an extension of the Golden Rule into the arena of
> > unequal minds with the potential of augmentation, i.e. a godlike being
> > should treat lesser beings as much like itself morally as possible,
> > because those beings may one day be godlike themselves.
>
> The key aspect is that you are discussing entities with the "potential
> for augmentation". There are two problems, a scale problem and an
> allowed reality problem.
>
> The scale problem has to do with the perceived degree of difficulty
> of the sub-SI becoming an SI. In theory we will soon have technologies
> that would let us give ants (with perhaps slightly larger heads)
> human-equivalent intelligence. Thus ants are "potentially augmentable".
> You don't see us treating ants with the "golden rule" generally.
I'm not so sure. What's important is not to treat all minds equally, but to treat them as we would want to be treated if we WERE that mind. An ant isn't capable of comprehending the notion of augmentation, but a human is. I suppose I should reformulate what I wrote above as "unequal minds capable of conceiving of their own augmentation".
> The allowed reality problem has to do with the fact that the SI "owns"
> the processors; presumably it is conscious of and completely controls
> anything going on within them. If the sub-SI doesn't have the interface
> hooks into the SI overlord then there is no way it can climb up the
> evolutionary ladder. It's like arguing that a Neanderthal can
> build a nuclear missile or an ant can turn itself into a human.
> These things aren't physically impossible, but the means to do
> what is required are unavailable to the subject. A sub-SI may
> be in an inescapable virtual reality in which it is a moral subject,
> but the overlord owns that virtual reality and to the SI, the sub is
> nothing but information organized to answer a question. When the
> question is answered, you are of no further use, so you get erased.
You're describing an "is"; I'm talking about an "ought". You could just as "accurately" say that a slave in the antebellum South had no legal ability to operate as an autonomous individual, so his owner could do whatever she wanted regarding the quality of his life.
I say "erasing" a human-level, active consciousness is wrong because a mind of such high order is a good in and of itself. I'm not claiming that this leads to ethical implications that aren't difficult to work out, but it surely leads to more acceptable results than saying a person's children are his "property" with which he can do whatever he pleases.
> > Even without considering the potential for moral-subject-status equality,
> > though, I believe the godlike SI is not completely without constraint
> > in how it should treat such lesser beings, no more than we are in how
> > we treat lesser animals.
>
> Well, historically we set about eliminating any animals that were a threat
> and we breed animals to be relatively tolerant of our mistreatments.
> The point would be that we *breed* animals and we *grow* plants and
> to a large degree do with them what we like unless a group of us gets
> together and convinces/passes a law/forces the others that doing that
> is morally wrong and they have to stop.
Again, I think you're mixing qualitatively different kinds of analysis, confusing description with prescription. The fact that something has been done in the past by a lot of people doesn't make it morally right. I know that moral nihilists claim that there can be nothing other than moral description, i.e. that "right" is simply what people observe and enforce as right. I'm doing something different here. While there may be some fundamentally arbitrary act in choosing one or a few principles as moral axioms, having done that, I don't accept social praxis as a determiner of MORALITY (but it may be very important in an ethical or legal sense).
In the examples to which you point above, the subjects of selective breeding or genetic engineering have few or none of the attributes of mind that I attempt to raise to the level of a moral axiom. Thus they present little or no moral challenge.
> SIs policing other SIs seems
> impossible with regard to the internal contents of what is going on
> within the SI. I can stop you from killing someone but I can't force
> you to stop thinking about killing someone.
I acknowledge that the idea I'm exploring could result in a completely new conception of "crime". It would become fundamentally important to distinguish between thoughts *I* am thinking and minds I am "hosting". I don't claim to have worked out all the implications of this, but I do note that we seem to be able to develop rules and laws for governing the interaction of people within society, even people of vastly different capacity and "power". We've even developed systems for governing the interaction of systems that govern such interactions. I don't see why the same couldn't be done with minds and minds-within-minds.
> > The morality of mind proposed here would dictate that the subprocess you
> > posit should be treated with the respect due a fairly complex mind, even
> > if that complexity is far less than that of the SI.
>
> You haven't convinced me. If the scale differences are too large, then
> the sub-SI is dogmeat. Your example regarding the "easy life" for subjects
> that graduate from lab animal school works in cases where the levels are
> relatively similar (humans & chimps) but fails when the scale is larger
> (humans & nematodes).
Quantitative scale isn't the only determinant: I would say that minds "above" a certain level are deserving of being treated as moral subjects. Finding the relevant qualitative "floor" (or more likely, "floors") is essentially the same problem of defining mind that epistemologists and AI researchers currently face.
> > Hal's put his finger on the fact that we're not treading entirely virgin
> > moral territory here. We already have to deal with moral questions
> > inherent in interactions of unequal minds and in one person having
> > some kind of moral "dominion" over another.
>
> I think the hair-splitting comes down to whether the subject is
> externally independent (a child), or internally self-contained (an idea
> in my brain). From my frame of reference my killing my "Brhump" or
> an SI erasing a sub-SI has the same moral aspect as occurs when
> one personality of an individual with multiple personality disorder
> takes over the mind, perhaps permanently killing/erasing one or
> more of the other personalities. In those situations, I don't think
> the psychologists get very wrapped up in the morality of killing
> off individual personalities. They judge their approach on what
> is feasible (can I integrate or must I eliminate?) and what is
> ultimately best for the survival and happiness of the "overlord".
You're right that there SEEMS to be a distinction between "child-minds" and "minds-within-minds". I don't doubt that different ethical and moral rules would apply, but I think one runs the risk of eroding principles of autonomy as one moves along a spectrum of "independence". I would ask how you would articulate principles for distinguishing one from the other. I see a spectrum, with a natural human child at one end, your "Brhump" in the middle, and a completely self-contained mind-simulation at the other end. But I don't think any distinctions we can make along this spectrum solve the problem.
Let me throw you a science-fictional hypothetical. What if an SI sees your pathetic, fragile biological nature and decides to flash-upload you, while you sleep, into a section of its own mental substrate? At first, your environment is perfectly simulated so that you do not perceive that you've been uploaded. Is your SI benefactor now an "overlord" as in your multiple-personality scenario? Is it now free to manipulate your virtual environment in any way it chooses, including to torment you? What if the torment is for a "good cause" (whatever that might be - say, to better understand the nature of human suffering)? What if the torment is just to satisfy some perverse sadistic pleasure of your SI overlord? What if the torment is just the result of random mutation of the virtual environment, and happens because the SI neglects to keep watch over you, being preoccupied with whatever it is that SIs care more about?
Seems like we've got exactly the situation depicted in the Judeo-Christian-Moslem scriptures. Looks like the book of Job. It stinks. If your answer is "That's just the way it is: He's god and you're not," then you've basically abandoned morality entirely. I would propose to you that that is exactly where your "scale" distinction gets you, and why we had better develop an "ethics of godhood".
I've made this comment obliquely before, but will be more explicit here. One of my intellectual heroes is Thomas Jefferson. One of the most moving episodes in his life was his attempt to outlaw slavery in Virginia as a step toward removing the institution from the American scene as the new republic was being formed. When that effort failed, he wrote his famous line, "I tremble for my country when I reflect that God is just" - a succinct summary of what he wrote elsewhere: that his generation, by not coming to terms with the problem of slavery, was bequeathing to a subsequent time a terrible reckoning.
I have a very similar conception of our current relationship to AI, the potential of SIs, radical human augmentation and the development of an "ethics of godhood". Failure to develop a rational and robust moral system that can accommodate the real potential of transhumanism in our own time may well condemn those of us alive 20 or 50 years hence to a tragedy not unlike the US Civil War, but of vastly greater proportions.
> [snip re: physical realities vs. virtual realities]
>
> >
> > This is only a difficult problem if we take a simplistic view of what
> > "reality" is. "Reality" for questions of morality and ethics IS mind, so
> > "virtual reality" is in a very real sense more "real" than the underlying
> > physical substrate. (Phew! That was hard to say . . .)
>
> But my mind may be killing off ideas or personalities or perhaps
> imagining many horrific things and people don't generally get
> upset about it. Virtual reality is real to the people in it
> but to the people outside of it, it doesn't really exist. So SIs
> presumably don't police the thoughts of (or the virtual realities
> destroyed by) other SIs.
I don't think this distinction can hold up. It presumes a far too concrete line between "real" reality and "virtual" reality. Consider your own present situation. Your mind is a process "running" on the wetware of your brain, supplied with information about the outside world through your "natural" biological senses. What happens to your distinction between "virtual" and "real" reality when we begin to augment those senses? The image you perceive by way of an electron microscope is a highly "virtual" one, almost a complete artifact of a series of "mappings". Much of your knowledge of the world outside of the few areas you have actually visited personally comes to you through many layers of filtration and technological intermediation.
At what point does a mind cross the line into "virtual reality"? Is a "brain in a vat", hooked up to the outside "real" world through cameras and microphones, in "real" reality or "virtual" reality? What if the spectrum of the camera through which it perceives the world is shifted from the "natural" human range of visible light? What if the image presented to the brain is augmented with textual and iconic tags conveying information about what it is perceiving? What if some objects in the "real" world are represented to the brain solely as icons, rather than as lines that correspond to the physical shape of the object? What if all of the objects are? What if some objects are represented, and others not, or their apparent size or color is modified to convey some information? What if we then begin to replace the brain's neurons, one at a time, with non-biological elements?
It seems that this distinction between "real" and "virtual" won't hold up. I don't think that anywhere along the above scenario of incremental "virtualization" do we cross a line into a realm in which we can begin to treat the mind involved as anything other than a moral subject.
> > [snip]
> > > ... (a) the relative "seniority" (?) of the actor as compared
> > > with the "person" being acted upon; or (b) the physical
> > > reality of the actions.
> >
> > I maintain that the latter factor is all but irrelevant to the moral
> > questions posed here
>
> I'm not convinced; I think the perception of the actions at the
> reality level is the critical basis for morality. Humans judge
> each other's morality by what they do, not what they think. SIs
> judge other SIs by whether they destroy potential SI civilizations
> (maybe) but not whether they are erasing virtual information in
> their own memory banks.
There's no such thing as "virtual information" - bits is bits. If you've composed a piece of music, but never written it down outside your body or played it, is it any less a composition of music?
> > Thus, the more "senior" the moral actor, the more justified we are in
> > judging its actions (while at the same time those moral judgments become
> > more difficult -- thus we eventually encounter an almost christian moral
> > inequality in which it becomes nearly -- but not entirely -- impossible
> > for "lesser", "created" beings to morally judge the actions of a much
> > "greater", "creator" being).
> >
> I would have to agree here. If you are incapable of understanding the
> context for what is occurring in your reality, you may be incapable
> of judging the morality of *what* happens to that reality.
Again, I think you make the error of drawing a bright line where one isn't possible. You're assuming that a subprocess of the kind we are discussing has NO conception of the "context" of his existence. He may have an imperfect conception of it, or one even vastly less articulated and complete than his "host's", but, above a certain (very hard to define) level, I would say that our subject is like Job - and deserves better than Yahweh gave him.
Greg Burch <GBurch1@aol.com>----<gburch@lockeliddell.com>
Attorney ::: Vice President, Extropy Institute ::: Wilderness Guide
http://users.aol.com/gburch1 -or- http://members.aol.com/gburch1
"Civilization is protest against nature; progress requires us to take
control of evolution."  Thomas Huxley