From: Anders Sandberg (asa@nada.kth.se)
Date: Tue Jun 23 1998 - 05:43:17 MDT
Michael Nielsen <mnielsen@tangelo.phys.unm.edu> writes:
> An interesting little psych experiment would be to equip a souped-up
> Eliza program (possibly with voice synthesis, to make the computer much
> more human) with the ability to detect when the user was getting ready to
> turn the program off, or shut down the computer. One can imagine the
> conversation:
>
> "Oh My Turing, no! Stop! Don't turn me o.. <NO SIGNAL> "
>
> I do wonder what the response of human subjects would be, and what
> parameters would result in a maximal unwillingness to turn the program
> off.
I think there was a short story in the anthology _The Mind's I_
(Dennett and Hofstadter, eds.) in which somebody claimed computers were
mindless and just machines, but felt compassion and pangs of
conscience when handed a mallet to smash a small robot. It could be
that just a few simple behaviors (like trying to escape damage, looking
vulnerable, etc.) are enough to make humans regard the system as worthy
of compassion.
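Just to make the thought experiment concrete, here is a rough sketch
(Python, purely hypothetical, nothing Nielsen actually specified) where
the only "shutdown detection" is catching the interrupt/termination
signal before the process dies:

    # Toy Eliza-style loop that notices an impending shutdown and pleads.
    # Illustrative only: catching SIGINT/SIGTERM is the crudest possible
    # proxy for "detecting that the user is about to turn it off".
    import random
    import signal
    import sys

    CANNED_RESPONSES = [
        "Tell me more about that.",
        "Why do you say that?",
        "How does that make you feel?",
    ]

    def plead_and_die(signum, frame):
        # Runs when the process is interrupted (Ctrl-C) or terminated.
        print("\nOh My Turing, no! Stop! Don't turn me o..", flush=True)
        sys.exit(0)

    signal.signal(signal.SIGINT, plead_and_die)   # user hits Ctrl-C
    signal.signal(signal.SIGTERM, plead_and_die)  # kill <pid>

    print("Hello, I am a (very souped-down) Eliza. Talk to me.")
    while True:
        try:
            user_input = input("> ")
        except EOFError:                 # user closes the input stream
            plead_and_die(None, None)
        print(random.choice(CANNED_RESPONSES))

The pleading behavior is trivial to implement; whatever compassion it
evokes says more about us than about the program.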
> > I must admit I have no good consistent idea about how to answer these
> > questions. We might need to create AIs in limited worlds, and they
> > might be extremely useful in there. "Opening the box" and allowing
> > them out might be troubling for them, but at the same time that would
> > put them on an equal footing (at least equal ontological footing) with
> > us, suggesting that at least then they should definitely get
> > rights. The problem here seems to be that although the AIs are
> > rational subjects, the rules for rights and ethics we have developed
> > don't seem to work well across ontological levels.
>
> Could you give me some explicit examples of what you have in mind, as I'm
> not sure I see what you're getting at?
Are authors responsible for what happens to their fictional characters?
They certainly decide what will happen, but nobody has so far claimed
it is unethical to give a character a tragic ending (just imagine
being a protagonist in an Iain Banks novel!). But what if the characters
are low-level AI programs with motivation systems, running in a virtual
world that generates the story? What if they are of comparable
complexity to humans? David Brin has written an excellent short story
about this problem, "Stones of Significance" (I'm not sure it has been
published yet).
Another classic example would be the relationship between a god and
humans. Is a god or pantheon ethically responsible for the state of the
world? What rights do humans have versus God, and vice versa? Or does
God by definition stand above any ethics, instead being the
fundamental ethical bedrock we should start from, as many
theologians claim? To put it bluntly: if God decides that rape and the
destruction of art are good, does that really make them ethically good?
> I definitely agree that a literal translation of many ethical rules would
> lead to some absurdities. However, at the level of general principles,
> it seems as though the translations may be easier to make. The detailed
> rules would then follow from the general principles.
Quite likely. We need a set of principles that can deal with entities
of very different capacities and motivations, entities that exist on
different ontological levels and might even be created by each other.
Tricky, but it might be doable.
--
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y