RE: Subjective Morality

From: Hal (hal@rain.org)
Date: Tue Jan 12 1999 - 09:27:23 MST


Billy Brown, <bbrown@conemsco.com>, writes:
> Once you decide to look and see if there is an objective morality, you have
> (broadly speaking) three possible results:
>
> 1. You find external, objectively verifiable proof that some particular
> moral system is correct. Then you're pretty much stuck with following it.

Is this notion coherent? Does it make sense to speak of a proof that
a moral system is correct?

Earlier I proposed that a "moral system" was a method of ranking possible
actions. Given an organism, a situation, and a set of possible actions,
the moral system is an algorithm for ranking the actions (or at least
for selecting the highest-ranked action). We interpret the ranking as
being in order of most-moral to least-moral, but mathematically it's just
a ranking.
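
To make the abstraction concrete, here is a toy sketch in Python (the
function names and the numeric "payoffs" are my own invention, purely
for illustration). Two opposite rankings, and each is a moral system
in exactly the same sense:

    # A "moral system" is a function from (organism, situation, actions)
    # to a ranking of the actions.  The reading "most-moral first" is
    # our interpretation; mathematically it is just an ordering.

    def greedy_system(organism, situation, actions):
        # One arbitrary system: prefer actions with higher payoff to
        # the organism.  Actions here are (name, payoff) pairs.
        return sorted(actions, key=lambda a: a[1], reverse=True)

    def ascetic_system(organism, situation, actions):
        # The opposite system: prefer lower payoff.  Both functions
        # produce rankings, so both count equally as moral systems.
        return sorted(actions, key=lambda a: a[1])

    actions = [("eat", 3), ("share", 1), ("fast", 0)]
    print(greedy_system(None, None, actions))
    # [('eat', 3), ('share', 1), ('fast', 0)]
    print(ascetic_system(None, None, actions))
    # [('fast', 0), ('share', 1), ('eat', 3)]

Nothing in the formalism prefers one of these functions to the other.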

Expressed in these abstract terms, there is no way to distinguish a
"good" moral system from a "bad" one. Every ranking algorithm is a
moral system, and they are all on equal footing. You can then
introduce a "meta-moral system" which ranks moral systems. Given all
possible algorithms (moral systems), it puts them into a rank order.
Again, we would interpret this ranking as most-moral moral system to
least-moral moral system, but mathematically it is just a ranking.
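
In the same spirit, a meta-moral system is just a function that ranks
moral systems. Continuing the toy sketch above (again, nothing here is
canonical), for any moral system we can write down a meta-moral system
that puts it first:

    # A meta-moral system ranks moral systems.  This factory returns a
    # meta-moral system that ranks `favorite` first and orders the rest
    # arbitrarily (here, by function name).

    def meta_system_favoring(favorite):
        def rank(moral_systems):
            return sorted(moral_systems,
                          key=lambda m: (m is not favorite, m.__name__))
        return rank

    meta = meta_system_favoring(ascetic_system)
    print([m.__name__ for m in meta([greedy_system, ascetic_system])])
    # ['ascetic_system', 'greedy_system']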

For claim (1) above to be true, there must be a meta-moral system which
selects a given moral system as the best one. Of course such
meta-moral systems exist; in fact there are infinitely many of them,
since for any moral system you like there is a meta-moral system that
ranks it first. How do we specify which one is best?

Do we then have to introduce a meta-meta moral system which will rank
meta-moral systems?

I don't see how to ground this regress. Indeed, it doesn't even seem
meaningful to say that a particular ranking is objectively selected.

I'd like to see an example of an objectively-best moral system for a
simple system. Consider a simple alife program, a simulated organism which
interacts with others in a simulated world, reproducing and eating and
trying to stay alive. Any algorithm it uses to decide what to do can be
interpreted as a moral system. What would it mean for there to be a
proof that a particular algorithm is morally correct?
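
For concreteness, a decision rule for such an organism might look like
this (a toy sketch with made-up names and payoffs):

    # A toy alife organism: given a situation and the available actions,
    # it executes the top-ranked action under some ranking function.
    # Whatever function sits in `system`'s place *is* its moral system,
    # by the definition above.

    def survival_system(organism, situation, actions):
        # Rank by expected energy gain; purely descriptive of behavior.
        return sorted(actions, key=lambda a: a[1], reverse=True)

    def choose_action(organism, situation, actions, system):
        return system(organism, situation, actions)[0]

    print(choose_action("critter", "near food",
                        [("eat", 3), ("flee", 1), ("rest", 0)],
                        survival_system))
    # ('eat', 3)

Swap survival_system for any other ranking function and the organism
still runs; what fact about the world could prove that one of those
functions is the morally correct one?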

Is the problem really that we are not smart enough to solve this?
It seems to me that the problem is simply that the question is
meaningless.

Hal


