Re: Drawing the Circle of Sentient Privilege (was RE: What's Important to Discuss)

From: Jef Allbright (jef@jefallbright.net)
Date: Tue Nov 19 2002 - 23:57:44 MST


> Dostoyevski said "if God is dead than everything is permitted".
>
> Bertrand Russell was apparently quite frustrated (History of Western
> Philosophy) that he could find no greater basis for saying that what
> the Nazis did was "wrong" than that he personally did not like it. As
> I understood it, he was arguing that "values" were a matter of taste.
> I think he would have liked to have been able to show that their
> actions had arisen as a result of logical error, but their value
> system, however obnoxious he found it, was not apparently illogical
> per se. So far as I am aware, no one has ever succeeded in finding a
> _logical_ basis for arguing that something is inherently right or
> inherently wrong.
>
> To try to ground ethics on rationality alone may be akin to
> searching for a perpetual motion machine, but maybe a more
> "universal" ethical system can be grounded on rationality plus some
> fundamental human traits like sociability. I don't know whether this
> latter goal is also impossible or just extremely difficult.

The search for a rational basis for morals is just one more example of
a frustrating conundrum at one level of context that becomes a
non-issue at a higher level. Moral questions are always set in the
context of human values; at a higher level, the universe simply
doesn't care. As long as a moral argument meets the needs of the
current values of the society judging it, it is considered correct. It
doesn't even need to be logically consistent. Later, as society's
values change, the old moral "absolutes" will be replaced with newer
moral "absolutes".

Past attempts to create a scientific, rational moral code were based
on observations of nature, such as "survival of the fittest", which in
a sense may be the ultimate natural law, no matter what rules we make.
However, this conflicts (ironically) with our evolutionary programming
to be compassionate. Other attempts, such as "the greatest good for
the greatest number", also fall apart because the determination of
what is "good" comes down to human values. Another idea is to always
rule in favor of the more "evolved" position. This was probably the
most interesting idea I gained from reading _Lila_, Robert Pirsig's
sequel to _Zen and the Art of Motorcycle Maintenance_. However, it has
the result of putting the needs of organized groups ahead of the needs
of individuals, and it's easy to come up with counterexamples showing
that this can be very dangerous in practice. Yet another
scientifically based moral guide might be to always rule in favor of
maximum extropy, but you can easily, and fatally, find yourself
trapped in a local minimum along the way to that goal.

The best we can do is a short-term approximation of "good", in accord
with current local social norms, and nature will take care of the rest
in its inexorable extropic way.

As an aside, Lee's Level 7 definition of identity can be considered
morally superior to his preceding levels, since it enhances and
broadens our potential. But this would naturally be superseded by an
even broader definition (Level 8?) encompassing multiple shared
identities with motivations both individual and common; and this would
in turn be superseded (Level 9?) by a more transcendent consciousness
pursuing goals but lacking human evolutionary motivations.

Hmmm, total absence of "weakness of the flesh". Would that be the
ultimate in morality, or would most of current society say it was
lacking in humanity, and therefore evil?

It all comes down to a question of context.

- Jef


