Re: Future Technologies of Death

From: Martin H. Pelet (tbm@cyrius.com)
Date: Thu Jan 01 1998 - 06:36:11 MST


> From: Anders Sandberg <asa@nada.kth.se>
> Date: Tue, 30 Dec 1997 22:11:45 +0100

> Yes, some kind of "responsibility Turing test" might be needed. Maybe
> it could consist of several scenarios the requester is confronted
> with, and its actions are studied, especially when asked to predict or
> extend the scenario.

While the Turing test is fairly safe from cheating, a responsibility
test carried out like a Turing test would be very susceptible to it.
You would not know whether the people being tested were telling you
what they believe or simply what they think you want to hear.
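
To make the idea concrete, here is a rough sketch of what such a
scenario-based harness might look like (in Python; all names and
structures are hypothetical illustrations, not an established
protocol). Note that nothing in it can tell a sincere answer from a
strategically agreeable one, which is exactly the cheating problem:

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Scenario:
    description: str                    # situation the subject is confronted with
    follow_up: str                      # prompt to predict or extend the scenario
    judge: Callable[[str, str], float]  # scores (action, prediction) in [0, 1]

def run_test(subject: Callable[[str], str],
             scenarios: List[Scenario]) -> float:
    # Present each scenario, record the subject's stated action and its
    # prediction of the consequences, then average the judges' scores.
    scores = []
    for s in scenarios:
        action = subject(s.description)
        prediction = subject(s.follow_up)
        scores.append(s.judge(action, prediction))
    return sum(scores) / len(scores)

A subject that merely models the judges can score as well as a sincere
one, so a high score measures persuasiveness at least as much as
responsibility.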

Once AI systems become available, you will of course have the tools
to read their whole minds directly, which would solve the problem
above, but doing so would violate their rights.

Moreover, a responsibility test performed today would give you no
assurance that the person will not turn bad a year from now under the
wrong influences.

Finally, if such a test came into existence, who would define what
counts as ethically acceptable, and who would ensure that the rules of
the test are updated as ethical standards change? Would it even be
possible for the test not to lag behind those standards?

-- 
Martin H. Pelet <tbm@cyrius.com>

