From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Jan 12 1999 - 19:59:38 MST
hal@rain.org wrote:
>
> I don't agree. The real question is whether there is any meaning to the
> term "objective morality". I don't know a definition which avoids
> circularity.
How about "the set of choices made by an SI"? That's what
Singularitarianism is all about: making more intelligent decisions
through a proxy. The idea is that either they'll all make the same
choice, or that at least *some* choices will be excluded as being
objectively wrong.
Scott Badger wrote:
>
> Of course distinctions exist between choices and consequences.
> What does that have to do with the existence of some immutable,
> objective morality?
Sigh... the trouble with discussing this subject is the verbal
contortions I have to go through to maintain correspondence between the
actual logic and our cognitive intuitions. (That's casting aspersions
on our cognition, not the theory. We're nuts; the logic is perfectly straightforward.)
What I meant is "distinctions useful for making the choice". That is,
distinctions which render some choices objectively better than others.
If you claim that this only applies within a pre-existing system, then I
just ask whether there are distinctions useful for choosing between
systems. Is there a sense in which blowing up a K-Mart is actually
_wrong_, not just a matter of taste? I don't know. Not knowing, it
seems to me that the results of assuming an objective answer take
priority over any results derived from the assumption that it's all a
matter of taste.
I'm not interested in subjective distinctions. "Subjective" means
"evolution is pulling my strings." "Subjective" means "I don't want to
justify my assumptions." "Subjective morality" is as silly as
"subjective economics" or "subjective physics". I want rational
justifications. If you assume anything, I want a rational justification
for that. I want a chain of logic leading right back to a blank slate.
And now that I have it, I'm not settling for anything less.
What is the concrete difference between Externalism and utilitarianism?
In utilitarianism, the morality of happiness is assumed. In
Externalism, it has to be rationally justified. Why, exactly, does this
make Externalism *less* rational? Occam's Razor!
I think we're getting lost in the Great Maze of Words. Let's look at
what you need to do to program "objective morality" vs. "utilitarianism"
into an AI. In the second case, you need an initial, unjustified goal
labeled "happiness" with positive value. Using Externalism, you can
start from a blank slate - all goals at zero value.
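To make the contrast concrete, here's a rough sketch in Python. Every
name in it (GoalSystem, add_goal, and so on) is hypothetical, invented
for this post rather than taken from any actual design; the point is
only that the utilitarian seed ships with one nonzero value nobody
justified, while the Externalist seed starts every goal at zero and
assigns value only when a justification is actually supplied.

    # Hypothetical sketch; none of these names come from a real design.

    class GoalSystem:
        def __init__(self):
            self.goals = {}  # goal name -> (value, justification)

        def add_goal(self, name, value=0.0, justification=None):
            # Externalist rule: nonzero value requires a justification.
            if value != 0.0 and justification is None:
                raise ValueError("unjustified nonzero goal: " + name)
            self.goals[name] = (value, justification)

    # Utilitarian seed: one built-in goal whose positive value is
    # simply assumed, written directly past the justification check.
    utilitarian = GoalSystem()
    utilitarian.goals["happiness"] = (1.0, None)

    # Externalist seed: blank slate; every goal starts at zero value.
    externalist = GoalSystem()
    externalist.add_goal("happiness")  # value 0.0 until justified
    externalist.add_goal("happiness", 1.0, justification="derived")

The entire difference is one line: the utilitarian system has to
smuggle in that first positive value, and the Externalist one refuses
to.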
> Agreed. I want to be free of the psychological dictates of evolutionary
> processes as well as the more artificial, religious proposals of an
> immutable morality. We are process, and any moral system will
> ultimately evolve to reflect changes in that process.
An immutable morality is no more religious than an immutable reality.
When science impinges on territory formerly held by religion, whether it
be the nature of reality or cosmology or even making choices, it makes
the issues scientific ones; it doesn't turn the science into
mysticism. (To borrow from Greg Egan.)
> >I don't necessarily _believe_ in objective morality, but my best course
> >is to _act_ as if there were an objective morality.
>
> How did you determine that is your best course? What exactly is wrong
> with a rational, functional, utilitarian morality?
I beg your pardon? Externalism is as rational, functional, and
utilitarian as you can get and still be a human being. Externalism is
what you get when you take utilitarianism and prune the unnecessary
assumptions. We're talking about a philosophy designed for AIs!
--
sentience@pobox.com         Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you everything
I think I know.