On Mon, Apr 30, 2001 at 08:46:58PM -0700, Samantha Atkins wrote:
>
> Will we apply the same rights and ethics to "artificial" sentients?
> Once the AI becomes self-aware will we stop rewiring its mind without
> its permission?
> If not, then why not? If there is an acceptable why not then does it
> also apply to wetware brains and beings once we understand well-enough
> how they work? Plenty of slippery slopes around here.
I think this is one of the BIG issues for any transhumanist ethics to
answer.
My own position would be that any system with a sufficient level of
ethical subjectness would have the right to its own life, including of
course the integrity of its own mind. Hence beyond a certain point we
must ask our AIs for permission to modify them. What medium a being is
implemented in does not change its ethical status.
Note that "ethical subject" is a slight cop-out: I still need to
define it. For the moment let me just handwave and say it implies the
ability to rationally change behavior in response to new information
(that is still rather loose, but defining the details is a whole thread
in itself). Basing rights on self-awareness, ability to experience pain,
consciousness or genetic makeup is too arbitrary, as we can imagine
systems lacking these but still behaving in ways that would be viewed as
ethical.
There are borderline cases when we deal with entities that are not fully
ethical subjects but quite close to it, such as children, deranged
people and half-built AIs. In these cases partial rights may be
necessary, with other entities allowed some influence over the first
entity in order to safeguard its personhood and development, but how
best to handle that is a whole thread in itself.
--
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y
This archive was generated by hypermail 2b30 : Mon May 28 2001 - 10:00:01 MDT