Re: Eugene's nuclear threat

From: Samantha Atkins (samantha@objectent.com)
Date: Thu Oct 05 2000 - 04:27:36 MDT


Robin Hanson wrote:
>
> On 10/2/2000, - samantha wrote:
> > > From a sufficiently removed
> > > perspective, replacing the human race with an intelligence vastly more
> > > aware, more perceptive, more intelligent and more conscious may not be
> > > entirely evil. I don't say it should happen, but it is something to
> > > consider in evaluating the morality of this outcome.
> >
> >You can only evaluate a morality within a framework allowing valuing.
> >What framework allows you to step completely outside of humanity and
> >value this as a non-evil possibility?
>
> I thought Hal did a good job of describing such a framework. Actually,
> any framework which values things in terms other than whether they have
> human DNA allows for the possibility of preferring AIs to humans.

That is a mischaracterization of my question, and false from the point
of view of what kinds of beings we are. Why should humans prefer AIs to
humans (assuming for a second it is an either/or, which I don't
believe)? What benefit is there for human beings (who/what we are) in
this? Are we stepping into some place or value system where we ourselves
are not, and trying to value our non-existence from this non-place?

> The
> only way to avoid this issue is to stack the deck and declare that only
> humans count.
>

The only way to avoid what issue? Since we are human beings, human
beings count quite centrally in our deliberations, and must. Do you
agree?

- samantha
