How does one go about entering a simulation, minus the knowledge that
it's a simulation, without violating the constraints of informed
consent? Setting up your exoself to reprogram yourself "ab initio" and
then lock you in an unbreakable hell would obviously appear to violate
the informed consent standard, just as copying yourself and then
torturing the copy would.
Since I'm one of the people who may wind up actually deciding the seed
AI's default motivations (in the event there isn't a forced solution),
I've given some thought to the issue. I have two possible solutions:
1) Subjunctive informed consent. You can reprogram yourself to be
Buffy, forget your external self, and send yourself through the second
season of _Buffy the Vampire Slayer_ (this was NOT a happy season). The
invariant that establishes informed consent is that *if*, at any time,
you knew the truth, you would still choose to go back in. The question
is what "knowing the truth" and "you" consist of, and how you and Buffy blend.
2) Overlay informed consent. We may find it very difficult to conceive
of simultaneously "knowing" and "not knowing" something, but I can
imagine a cognitive architecture which would "protect" the core Buffy
processes while maintaining the awareness and processing of the external
self. Any given sequence of cognitive events, including emotional
bindings dependent on the belief that Sunnydale is real, would proceed
as if the knowledge that the world is a simulation did not exist, and
memories of that experience would be formed; however, a smooth blend
between that untouched core and the external awareness would be
maintained. Thus you could remain "you" while being someone else.
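To make the two notions concrete, here is a minimal toy sketch in Python. It is purely illustrative and not anything the post itself specifies: CoreSelf, Overlay, and the would_reenter predicate are all invented names, and that predicate is exactly the part the post leaves open (what "knowing the truth" and "you" consist of). The overlay runs the protected core on a belief set with the simulation-knowledge filtered out, shares a single memory store with it, and checks the subjunctive-consent invariant at every step.

# Toy model of "overlay informed consent" plus the subjunctive-consent
# invariant.  Purely illustrative -- every name here is invented.

class CoreSelf:
    """The protected inner process (e.g. Buffy): reasons only over a
    belief set from which the simulation-knowledge has been filtered."""
    def __init__(self, beliefs):
        self.beliefs = beliefs          # e.g. {"Sunnydale is real"}

    def experience(self, event):
        # Emotional bindings form as if the filtered beliefs were the
        # whole truth; return the memory trace of that experience.
        return {"event": event, "believed": sorted(self.beliefs)}


class Overlay:
    """The external self: holds the full truth, shares the memory store
    with the core, and checks subjunctive consent at every step."""
    def __init__(self, full_beliefs, hidden, would_reenter):
        self.full_beliefs = set(full_beliefs)
        self.core = CoreSelf(self.full_beliefs - set(hidden))
        self.memories = []                   # the "smooth blend": one store
        self.would_reenter = would_reenter   # counterfactual predicate

    def step(self, event):
        # Invariant: *if* the participant knew the truth right now,
        # they would still choose to go back in.
        if not self.would_reenter(self.full_beliefs, self.memories):
            raise RuntimeError("subjunctive consent withdrawn -- halt")
        self.memories.append(self.core.experience(event))


# Usage: the run continues only while the informed counterfactual says yes.
sim = Overlay(
    full_beliefs={"Sunnydale is real", "this is a simulation"},
    hidden={"this is a simulation"},
    would_reenter=lambda truth, mem: True,   # stand-in for the real question
)
sim.step("second season, episode one")

The stand-in lambda is doing all the philosophical work; the sketch only shows where such a counterfactual check would sit inside an overlay architecture.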
The task is not programming either of those solutions into the AI, of
course. Nobody can foresee all the problems, so specifying solutions is
inadequate as a method. The task is creating a mind such that the SI,
faced with the problem of informed consent, comes up with that sort of
creative and satisfying solution, or at least a reasonable solution as
opposed to a stupid or crystalline one. Since the human intuitions are
knowable and even explicable, this should be possible. Of course, I'm
also hoping that all the moral questions have objective answers, which
introduces an additional constraint on the task: not doing anything that
would screw up the SI (or the initial stages thereof) if the human
intuitions turn out to be wrong.
--
sentience@pobox.com              Eliezer S. Yudkowsky
http://pobox.com/~sentience/beyond.html