From: Samantha Atkins (sjatkins@gmail.com)
Date: Thu Jun 24 2004 - 12:53:08 MDT
How in the world would you apply "reasonable" to "intuition"? Either
"morality" can be defined for some group (like humans) with enough
objectivity to make its consideration useful, or it is useless to
worry over a word to which we attach no real meaning. I believe that
morality is objectively definable within the context of a particular
group of sentients, and perhaps of all sentients. But that is a
minority position here. The problem does not get better by
substituting a poll of humanity, or an extrapolation of human (or
greater-than-human) intent/volition, for such a definition. I can
agree that CV may give better guidance over some types of
planning/decisions than morality, or what passes for morality for
most folks.
Conflating morality and CV is a mistake that I don't see Eliezer making.
BTW, your notion of SM people and their desires is very out of whack.
On Thu, 24 Jun 2004 20:30:22 +1200 (NZST), Marc Geddes
<marc_geddes@yahoo.co.nz> wrote:
>
> I wouldn't rule out the possibility of some sort of
> objective morality yet. Sure, you need to look at
> humans for 'calibration' of any reasonable morality
> that would speak to the wants and needs of humans, but
> that doesn't mean that there isn't some sort of
> objective standard for determining the morality of
> various human wants and needs.
>
Wants and needs are not things that have "morality", so speaking of
the morality of wants and needs is meaningless.
> What Eli seems to be worried about is the possibility
> of A.I. programmers 'taking over the world'. But does
> the world really need anyone to 'run' it? Not
> according to the anarcho-capitalists and various other
> political systems that have been floated. Not that I'm
> advocating anarchy, I'm just pointing out that the
> whole idea of a singleton centralized agent might be
> misguided. In any event the way the world seems to
> work in the modern free market democracies is that
> people are assigned status roughly according to their
> talent and latent cognitive abilities. For instance
> children have fewer rights than adults; brilliant
> adults who create good products end up with more
> economic power etc. Since FAI would have cognitive
> abilities far beyond an ordinary human, it's not clear
> why it would be wrong for the FAI to be given the most
> rights.
>
I don't believe that rights necessarily increase with a quantitative
increase in some aspect of the entity whose rights are being derived.
Rights, like morality, can only be tied to reality by considering the
nature of the entities we are talking about. Rights of the
"unalienable" kind are those things required for the well-functioning
of that type of entity. It is not a matter of "more" rights but of
different rights for different types of entities. The rights of
different entities will intersect where those rights derive from more
or less intersecting aspects of their natures. It is possible that a
vastly greater intelligence would by its nature require more rights
than we do, but since rights arise in the interaction of entities, I
do not see that this is necessarily so.
The realm of morality is also the realm of inter-entity activity.
This is a smaller and more delimited sphere than that potentially
covered by CV.
-s