From: Samantha Atkins (samantha@objectent.com)
Date: Sun Nov 24 2002 - 16:54:36 MST
Michael Roy Ames wrote:
>Dear Ben,
>
>You wrote:
>
>
>>On a slightly different note, I am still not sure what
>>you mean by "empirically verifiable ethical/moral
>>system." This almost strikes me as a nonsequitur.
>
>Yeah. Heh. It's what everyone wants, but nobody has. Do we ground
>our morals in the views of one person? A whole bunch of people? In
>a measure of some situation? Or what? Currently morals seem to be
>grounded in consensus opinion, and as such, Friendly AI's cloud of
>pan-human characteristics would attempt to capture that.
>
It should be grounded in what is actually the best achievable by the
sentients concerned. Not best in terms of something outside themselves,
but in the context of themselves. Morality only exists in that context.
It is a tool for choosing between alternatives. Such choices can only
be made within a context. Speaking of Morality as separate from context
is pointless. It is not simply a matter of consensus.
>Sure. It's nothing fancy.
>
>First, one has to define the goal. Here are some examples of goals:
>
>To get to heaven.
>To be considered a good person by oneself and others.
>To perpetuate the human species.
>To increase the quantity and quality of sentience in this Galaxy.
>To increase the complexity of the universe.
>
>Second, one has to define the system that is intended to reach (or
>move towards) a goal. Here are some examples of systems:
>
>Christianity.
>Humanism.
>Buddhism.
>Complexitism.
>Friendly AI.
>
The system is irrelevant to formulating the fundamental goal. It is a
means, not an end. It is subject to continual re-evaluation relative to
the ends chosen.
>Third, you compare how well actions taken within the system move
>toward (or reach) the goal. If actions within the system do in fact
>achieve the goal, then the system is verified. Otherwise it is
>refuted.
>
It seems to me you are mixing systems together arbitrarily. That adds
nothing and actually detracts from the discussion.
>
>I ask myself "what is the most Right goal?" or "what goal
>encompasses all the other Right goals?"
>
Why do you ask such an utterly abstract question in the first place?
What good is there in that?
>Then I ask myself "what has the
>universe been doing up to now (that we know of)?"
>
The universe hasn't been doing anything in terms of conscious choice.
>And one answer
>appeared to be 'Increasing in complexity'. This seems to be the
>answer the Universe is giving us when we ask "what are you doing?".
>
This is highly confused, as it addresses an inanimate and/or unconscious
or inaccessible-if-conscious aggregate as if it were conscious and
accessibly so, and as if what it is "doing" were at all relevant to
what is the best basis for a moral system for us here and now. What for?
>This answer correlates well with many of the human moral systems that
>have grown and prospered over the centuries.
>
No, it doesn't.
>Even now we, as humans,
>are attempting to push the complexity of our environment and
>ourselves to ever greater heights.
>
This is not the primary goal or center of morality, though; it is a
by-product. It cannot meaningfully be made the primary goal.
>This seems to be what the
>universe *does*, and as such, I suspect it is the Right thing to do.
>
That does not follow. Nor is there any reason to believe there is any
absolute Right thing outside of a relevant context. Simply taking the
biggest context possible and anthropomorphizing the hell out of it
hardly gives an absolute Right.
- samantha