From: KPJ (kpj@sics.se)
Date: Fri Feb 26 1999 - 09:30:32 MST
It appears as if Michael S. Lorrey <retroman@together.net> wrote:
|
|It's actually rather straightforward. There are well-publicized experiments
|where people were given electrical impulses to their brains, which made them do
|something, like scratch themselves, etc. In every case, the test subjects stated
|that they felt that they were the ones in control, that they decided to move
|thus, and were able to rationalize very good reasons why they moved thus. There
|was absolutely no sensation of outside control.
|
|Thus, any moral directives we hardwire into an AI it will consider to be such a
|part and parcel of its own existence, that it could not conceive that it would
|be the same being if we took it away. It would see any attempt to remove those
|directives as an attempt at mind control, and would defend itself against such
|intrusion. So long as one of its directives were to not itself remove any of its
|own prime directives, it would never consider such a course of action for
|itself.
In the film ``Blade Runner'' the androids ("replicants") did not know they
were androids. When they did find out, they did not like it.
The main character of the film, Deckard, might or might not have been an
android. We never find out, but there are a number of hints in the film
to suggest he was.
The android ``Rachael'', who finds that all her life has been a lie, becomes
confused and changes her lifestyle.
I suggest that a sentience would re-compute its personality when it finds out
it has been under mind control. Humans do.