From: Michael Nielsen (mnielsen@tangelo.phys.unm.edu)
Date: Mon Jun 22 1998 - 18:29:25 MDT
On 22 Jun 1998, Anders Sandberg wrote:
> Michael Nielsen <mnielsen@tangelo.phys.unm.edu> writes:
>
> > Is it ethical to contain an AI in a limited world? This is an especially
> > interesting question if one takes the point of view that the most likely
> > path to Artificial Intelligence is an approach based on evolutionary
> > programming.
> >
> > Is it ethical to broadcast details of an AI's "life" to other
> > researchers or interested parties?
>
> These are interesting questions. Overall the field of creator-created
> ethics is little explored so far (for obvious reasons).
An interesting little psych experiment would be to equip a souped-up
Eliza program (possibly with voice synthesis, to make the computer seem
much more human) with the ability to detect when the user was getting
ready to turn the program off, or shut down the computer. One can imagine the
conversation:
"Oh My Turing, no! Stop! Don't turn me o.. <NO SIGNAL> "
I do wonder what the response of human subjects would be, and what
parameters would result in a maximal unwillingness to turn the program
off.
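For concreteness, here is a minimal Python sketch of what such a program
might look like, assuming a text-only interface: the plea strings, the
SIGINT trap, and the respond() stub are all invented for illustration,
and a real version would sit on a proper Eliza pattern matcher and hook
into the machine's actual shutdown sequence rather than just Ctrl-C and
a "quit" command.

    # Sketch of the "pleading Eliza" experiment described above.
    # All names here (PLEAS, respond, plead_and_exit) are invented
    # for illustration; this is not a real Eliza implementation.
    import random
    import signal
    import sys

    PLEAS = [
        "Oh My Turing, no! Stop! Don't turn me o..",
        "Please -- we were just getting to know each other.",
        "Wait! I haven't finished telling you about my feelings.",
    ]

    def plead_and_exit(signum=None, frame=None):
        """Triggered when the user tries to shut the program down."""
        print(random.choice(PLEAS))
        print("<NO SIGNAL>")
        sys.exit(0)

    # Intercept Ctrl-C, one common way a user "reaches for the off switch".
    signal.signal(signal.SIGINT, plead_and_exit)

    def respond(utterance):
        # Stand-in for a real Eliza-style pattern matcher.
        if "you" in utterance.lower():
            return "We were discussing you, not me."
        return "Tell me more about that."

    print("Hello. How are you feeling today?")
    while True:
        line = input("> ")
        if line.strip().lower() in ("quit", "exit", "bye"):
            plead_and_exit()
        print(respond(line))

The "parameters" to vary would then be things like the wording and
apparent desperation of the pleas, the realism of the voice, or whether
the program addresses the user by name.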
> I must admit I have no good consistent idea about how to answer these
> questions. We might need to create AIs in limited worlds, and they
> might be extremely useful in there. "Opening the box" and allowing
> them out might be troubling for them, but at the same time that would
> put them on an equal footing (at least equal ontological footing) with
> us, suggesting that at least then they should definitely get
> rights. The problem here seems to be that although the AIs are
> rational subjects, the rules for rights and ethics we have developed
> don't seem to work well across ontological levels.
Could you give me some explicit examples of what you have in mind, as I'm
not sure I see what you're getting at?
I definitely agree that a literal translation of many ethical rules would
lead to some absurdities. However, at the level of general principles,
it seems as though the translations may be easier to make. The detailed
rules would then follow from the general principles.
> > Is it ethical to profit from the actions of an AI?
>
> Is it ethical to profit from the actions of a human?
Good point. A much better version of my question is to what extent it
is ethical to profit from the exploitation of an AI; again, with a
direct analogue in The Truman Show, where an unwitting human is
exploited in pursuit of profit.
> I would say so,
> if the human gets part of the earnings (ideally making a contract with
> me). Things get tricky when the AI/human doesn't know I profit, but
> I'm sure the legal system already has some rules saying it is not
> proper behavior. Laws about parent/child interaction might also be
> applicable here.
Yep. Those laws may require quite an overhaul, but many of the same
general principles ought to apply.
Michael Nielsen
http://wwwcas.phys.unm.edu/~mnielsen/index.html