From: Eric Watt Forste (arkuat@pobox.com)
Date: Mon Feb 10 1997 - 12:33:32 MST
Eliezer Yudkowsky writes:
>Well? What is it? I'm waiting. I realize that a human being is more
>complex than a 386, but what difference does it make? A 386 can
>maximize about as well as a human, even if it has fewer resources to do
>it with.
Eliezer, you're simply being obtuse. There are several
differences. A 386 cannot make love to you. A 386 cannot discuss
epistemology with you. A 386 can't compose anything half as good
as a Mozart piece, at least not yet. A 386 cannot send chills down
my spine the way Tori Amos can. Listing obvious observations about
the differences between human beings and toasters grows tiresome...
surely you are capable of observing on your own. Of course, you
will now come back at me and claim that these differences make no
difference, and you will be wrong. These differences perhaps make
no difference *to you*, but as I said, nihilism is not a philosophy,
nihilism is an emotional disorder. These things make a difference
to *me*.
>Again. Why does something acting to maximize X make it valuable, and
>why is X valuable if a human acts to maximize it but not if a 386 acts
>to maximize it?
When did I claim that something is not valuable if a 386 acts to
maximize it? You're putting words in my mouth. But there are certain
things (such as truth, beauty, justice, love, etc.) that human beings
often act towards maximizing, that you cannot (yet) program a simple von
Neumann machine to maximize. When you show me a "simple von Neumann
machine" programmed so as to maximize these things, I'll be happy to
recognize it as a person. Or perhaps not. Perhaps by then our society
will be built around new, characteristically poorly understood
abstractions. And again, you're divorcing values from valuers. These
things are valuable to *me*. I cannot make them valuable to you by means
of argument. That's what the fact-value dichotomy is about. If you want
to use the knife of reason to destroy your own values, that's your own
business.
>I've stated that there's no ethical difference 'tween a human maximizing
>something and a thermostat following the same strategy. Respond, remain
>silent, or claim I'm "obviously wrong", refuse to back it up, and change
>your name to Searle.
How about I just avoid your company in the future? That's my usual
response to people for whom human life is cheap. I happen to think it is
a rational response.
>> Assertions prove nothing, Eliezer. How would you like to go about
>> demonstrating to us the arguments behind your conviction that value
>> is independent of valuers? If this is your axiom, then you are
>> simply assuming what it is that you are setting out to prove: that
>> life has no value.
>
>I'm setting out to prove no such thing. Nor does it follow from
>observer-independent value that life is valueless. My claim is that
>observer-dependent value in the direct, pleasure-center sense requires a
>definition of "value" which is not intuitively compatible with what
>*anyone* means, including all participants in this discussion, by asking
>whether life is worth living. Thus making observer-dependent values no
>values at all.
Who said anything about "the direct, pleasure-center sense"? This
is *not* what I'm talking about. Sometimes unpleasant things are
valuable, and sometimes pleasant things have negative utility.
Life's being worth living is not a *given*; it is not an answer to
a question that can be proved or demonstrated. Life's being worth
living is a consequence of action. There's nothing sillier than
someone mooning around asking whether or not life is worth living,
when they could be going out and *making* their life worth living.
You're doing the usual silly thing people do when they try to *prove*
free will instead of *choosing* it.
>> Perhaps it's the extra fuss that makes the difference? A simple
>> computer can declare "I will resist if you turn me off." You turn
>> it off and nothing happens. I, on the other hand, can declare "I
>> will resist if you try to take away my raincoat." and you will find
>> it a considerably harder task to take away my raincoat than to turn
>> off the simple computer.
>
>Now you're citing something with a vague bit of justification: You
>state that *meaningful* maximization requires *resistance* to
>de-maximizing forces.
I made no such statement. I was making a suggestion, and I didn't say
anything about "requiring"... I was merely talking about "making a
difference". You are oversimplifying my suggestion and cramming it into
some weird crystal-edged intellectual pigeonhole of yours again.
>Someday, computer security systems may put up quite a fight if you try
>to turn them off over the 'Net. This is not the place to retype GURPS
>Cyberpunk, but possible actions range from severing connections to
>repairing damaged programs to rewriting entrance architectures to impose
>new password-challenges. I could write, right now, an application that
>resisted death in a dozen ways, from trying to copy itself elsewhere to
>disabling the commonly used off-switch. If someone can't resist, do
>they lose their right to live? This is the "fetuses/computers have no
>rights because they are dependent on us" absurdity all over again. If I
>make you dependent on me, even for blood transfusions, do you lose your
>rights as a sentient being?
Yes, the standard distinction between persons and non-persons is
breaking down, and we're going to have to come up with new heuristics for
making that distinction. What else is new? But the fact that we will
soon no longer be able to make the glib equation "person = human" has no
impact on the question of whether or not life can be worth living.
>There are dozens of far-from-sentient classical AIs that will attempt to
>achieve goals using combinations of actions. From where I sit, I see a
>folder called "shrdlu" which contains a computer program that will
>attempt to stack up blocks and uses chained actions to that effect.
>Does this program have any ethical relevance? Within its world, do
>stacked blocks have meaning?
Not to me it doesn't. But please make up your own mind on this question.
I don't think there's any screaming need for consensus yet on this
particular issue. If some people want to treat present-day AI programs
as people, that doesn't bother me any. It's good practice for the
future, and not much weirder than the most radical animal-rights
activists.
>Don't give me that "sad that you can't tell the difference" copout.
>Tell me what the fundamental difference is, and why "maximizing X" is
>not sufficient to make X valuable, and how humans maximizing X includes
>the key ingredient.
Do your own research, buddy. I'm not trying to sell you anything, I'm
just trying to steer you away from suicide-talk, because I would be
pissed off at you if you killed yourself over some ridiculous
abstraction that you had set up in your brain in such a way that it had
a negative effect on your brain chemistry. These things do happen!
Abstractions make good tools and poor masters. The meaning of life is
either right at the tips of your nerves, in the experiences that you are
having right now, or it is not at all. You are scaring yourself with
ghosts. The talk you are talking is death-memes, information patterns
that get into people's heads and sometimes kill them. It's lemming talk
even worse than the most virulent Christian fundamentalist memes.
Perhaps you are just playing intellectual games, and you're not really
*feeling* the sadness that comes with such ridiculous beliefs as the
ones you are pretending to espouse. But I've felt it enough that I have
very little patience for these ideas, and if you don't like my lack of
patience, tough! Go tell it to Kurt Cobain.
>Name as many as you want, because I can PROVE that a giant
>hashtable-thermostat can maximize anything a computational mind can.
>I.e. REALLY BIIG (but less than 3^^^3) lookup table, duplicates inputs
>and outputs, no mind, but works as well. Again, what difference does it
>make what does the maximizing?
Okay, bring me a thermostat that maximizes beauty. I want it next
Thursday. Thanks.
>Why is tickling the pleasure centers significant? You're simply ducking
>the question. Evolution makes a lot of things *pleasurable*, this
>causes us to assign them value, but I want to know if things *really*
>have value that isn't just a trick of our genes! If our genes made us
>think that sadism was inherently valuable, would it be? You've named
>things that we think of as not merely pleasurable, but *holy* - but John
>K Clark would laugh at quite a few "holy" values.
Significant of what? What question is it, precisely, that I'm ducking
here? Besides, you don't even know what you mean by "the pleasure
centers". You are scaring yourself with a lot of voodoo talk, building
frightening theories about things that have not yet been scientifically
investigated with any good results. "The pleasure centers" could turn
out to have as much or as little meaning as phlogiston or caloric. If
you are sincerely interested in these precise questions, you should
take up empirical neuroscience research, instead of pretending to be
able to come up with answers by mere introspection like Descartes and
Leibniz. Answers to such questions come from laboratories, not from
mailing lists.
>Now, it is possible that pleasure has inherent value. David Pearce
>thinks so. He could even be right; subjective conscious pleasure could
>be inherently valuable. In which case the logical course would be
>wireheading on the greatest possible scale, using only the Power
>necessary to "turn on" the Universe and then devoting all those useless
>thinking circuits to pleasure-center emulation. And yet, some deep part
>of us says: "This isn't meaningful; this is a dead end."
This presumes that the phrase "inherent value" means anything
independent of a particular valuer, which it does not! Value is a
relationship between the valuer and the valued. And there are no
rules to tell you what it is. This is precisely what it means to
be free. You get to decide what is right and what is wrong: you
are free. Thou art God. If you mess things up for your cohorts,
they will mess things up for you, so you might want to choose to
refrain from force and fraud (I usually do).
You aren't going to get freedom from choice, no matter how much you
crave it. As the old Devo song goes, freedom of choice is what you've
got.
>I've been at peace with the world around me, while listening to Bach, if
>at few other times. It moves me. I admit it. But I still have the
>moral courage to defy my own sense of outrage and say, "So what?"
>Simply because I assign value to a thing does not necessarily make it
>valuable! Why is assigning value to Bach better than assigning value to
>cats? And if the latter is wrong, why not the first as well?
Such a declaration is a degradation and cheapening of yourself as a
valuer. It is a repudiation of your own freedom to create values. You
are free to act, to create, to enjoy, to love. If that's not enough for
you, perhaps you can find something more in a basement universe with
different laws of physics.
>Your intuitive definition of "value" is "All pleasurable activities that
>I have not categorized as 'dead ends'." My response: "So what?
>Justify!"
Not all valuable things are pleasurable, for starters. Perhaps you
are so confused because you have oversimplified here (hedonism is
always an oversimplification, although perhaps David Pearce did it
right this time). But I'm not espousing simple hedonism here. And I
don't have to justify myself to you, buddy. (Moral philosophy always
ends up in rudeness, it seems.)
>So it could be true that I'd be a lot different if brought up in Korea.
>But the evidence available to me suggests that innate ability levels do
>an awful lot to determine what philosophy you choose, perhaps to the
>point of simple genetic determinism.
Now you are begging the question: you are assuming that innate ability
levels are determined by your genes and not by your first-five-years
environment, which is precisely the claim that I was attacking!
-- Eric Watt Forste ++ arkuat@pobox.com ++ http://www.pobox.com/~arkuat/