From: Anders Sandberg (asa@nada.kth.se)
Date: Mon Jun 22 1998 - 02:46:46 MDT
Michael Nielsen <mnielsen@tangelo.phys.unm.edu> writes:
> Is it ethical to contain an AI in a limited world? This is an especially
> interesting question if one takes the point of view that the most likely
> path to Artificial Intelligence is an approach based on evolutionary
> programming.
>
> Is it ethical to broadcast details of an AI's "life" to other
> researchers or interested parties?
These are interesting questions. Overall, the field of creator-created
ethics has been little explored so far (for obvious reasons).
I must admit I have no good, consistent idea of how to answer these
questions. We might need to create AIs in limited worlds, and they
might be extremely useful there. "Opening the box" and allowing
them out might be troubling for them, but at the same time it would
put them on an equal footing (at least an equal ontological footing)
with us, suggesting that at least then they should definitely get
rights. The problem here seems to be that although the AIs are
rational subjects, the rules for rights and ethics we have developed
don't seem to work well across ontological levels.
> Is it ethical to profit from the actions of an AI?
Is it ethical to profit from the actions of a human? I would say so,
if the human gets part of the earnings (ideally by making a contract
with me). Things get tricky when the AI/human doesn't know I profit,
but I'm sure the legal system already has rules saying that is not
proper behavior. Laws about parent/child interaction might also be
applicable here.
-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y