From: Christopher McKinstry (cmckinst@eso.org)
Date: Sat Jul 07 2001 - 14:11:00 MDT
"Eliezer S. Yudkowsky" wrote:
> Regardless of whether you agree, I would ask that if you in the future
> happen to discuss the possibility of evolutionary, war-to-the-death
> competition between humans and AIs, you also at least mention the
> possibility of Friendly AI, even if it consists of the phrase "There have
> been proposals for Friendly AI, but I think they're unworkable."
Just one more point before I go off to read your 'Friendly AI'... if
Kurzweil is right, and in the future I can scan my personality into a
computer, the event will create an instant conflict, simply because the
copied version of myself will fight to the death to avoid being turned
off by the original version. I would be just as friendly as I am now to my
fellow 'virtual' humans (as long as I could verify they were virtual),
but I would see 'real' humans as potentially very dangerous to my
continued consciousness. No matter how friendly I am, I would object
very strongly to my reality being externally controlled. Bad things
will happen unless we can develop some form of trust protocol, and I am
not confident we can.
Chris
This archive was generated by hypermail 2.1.5 : Sat Nov 02 2002 - 08:08:33 MST