Re: Absolute Right and Wrong (was RE: Drawing the Circle of Sentient Privilege)

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Nov 22 2002 - 12:02:18 MST


Lee Corbin wrote:
> Eliezer writes
>
>>I don't think your epistemological work is done until you
>>explain the origins of the perceived cognitive difference.
>>Why is it that people seem to see "It is MORALLY WRONG
>>that x!" as a different statement than "I and most people
>>disapprove of x"?
>
> Because the latter statement implies only the truth, namely
> that the speaker's and most people's *values* are violated
> by x. By speaking of something as MORALLY WRONG an effort
> is made to speak in the objective mode, so that what is
> conveyed is a claim, backed by the judgment and authority
> of the speaker, that x has a universal failing, and that
> anyone ought to be able to see that.

Okay. Here's the difference from my perspective: I disapprove of
cauliflower, while murder is morally wrong. Given that in both cases it
is undesirable that X happen to me, what distinguishes the two? The
reason cauliflower is undesirable is that I don't happen to like the taste
of cauliflower; if I model a future in which my tastes have changed so
that I now like cauliflower, I model it as being desirable, in that
future, that I eat cauliflower. On the other hand, I model murder as
morally wrong completely irrespective of how I feel about it. If a future
Eliezer were somehow warped to like murder, it might be a physical fact
that warped-Eliezer would commit murder, but to me, here and now, that
doesn't matter; murder, in that visualized future, remains wrong. On the
other hand, if my future self develops a taste for cauliflower, I have no
problem sympathizing with that future self because I model the
undesirability of cauliflower as being strictly contingent on my cognitive
representation of a dislike for cauliflower. Murder has no such
dependency. Of course the dependency-free model is itself a cognitive
representation, but that shouldn't be confused with actively modeling a
dependency.

Or in simpler terms, when I say that X is MORALLY WRONG, I mean that X
seems to me to be wrong regardless of what I think about it, and the fact
that an alternate Eliezer might be warped to think X was right makes no
difference in that. Similarly, it seems to me that 2 + 2 = 4 whether I
believe that or not, and the idea of being hypnotized to believe 2 + 2 = 5
doesn't change that, nor does the fact that "2 + 2 = 4" is a cognitive
representation somewhere in my brain.

>>What is it that, for you, distinguishes that which you
>>disapprove of with a frown, and that which many people
>>including you band together to disapprove of with a gun?
>
> Good question (as are the above). That which we band
> together against and have laws against is behavior that
> has been found unworkable for societies by evolution.
> That is to say, societies that condone theft or murder
> are not fit societies beyond the very short run.
>
> At least, that is all that we *should* have laws against.

Heh. I wanted to ask about torturing simulated versions of Lee Corbin
running on privately owned computers, but as I recall, you have no problem
with that. My compliments; discovering this kind of unexpected
self-consistency implies a philosophy genuinely based on and extrapolated
from deep principles.

But I still think you're WRONG about that...

-- 
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


This archive was generated by hypermail 2.1.5 : Wed Jan 15 2003 - 17:58:19 MST