mark@unicorn.com wrote:
> Hal [hal@rain.org] wrote:
> >I don't see how to ground this regress. It doesn't even seem to me that
> >it makes sense to say that a particular ranking is objectively selected.
>
> I agree with Hal; I think it's easy to find an optimal moral code once you
> make some basic decisions as to what you regard as a good outcome, but you
> can't say that one moral system is objectively better than another because
> you have to make those initial subjective decisions.
> Now, you can probably come up with a set of rational, subjective,
> transhuman axioms which most of us would agree with and work out an
> optimal moral system from that; but it still wouldn't be an
> *objectively* optimal system because others have different axioms. They
> might be irrational, but irrationality is their choice.
Actually, I doubt we could get even a majority of the list to agree on anything except the most general statements. At best, we might get a half dozen systems that draw very different conclusions from similar basic tenets. That's our second morality problem: given a set of basic axioms, what choices should we make? (The first, of course, being whether any set of axioms can be objectively justified at all.)
The only way I can see of answering the first question is to create beings who do not share our limitations and see what they come up with. Answering the second question for any given set of axioms (proven or not) seems more tractable, but it would require a large-scale application of the scientific method to the field, which is unlikely to happen any time soon.
In the meantime, I regard finding the answers to these questions as a high-ranking goal (though obviously not the only one - I still have a best-guess moral system to use until/unless a real one is found).
Billy Brown, MCSE+I
bbrown@conemsco.com