From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Aug 04 1999 - 17:47:55 MDT
You're being far too anthropomorphic. It doesn't make a damn's worth of
difference whether AIs are raised permissively or harshly; they can't
possibly get a "taste for blood", and I'd be seriously surprised ever to
find two AIs with the same architecture and conflicting goals.
"Parenting" isn't going to help at all. All of these things rely on a
base of human emotions that isn't going to exist in an AI.
I suggest:
http://pobox.com/~sentience/AI_design.temp.html
The sections you want are "Interim Goal Systems" and "The Prime Directive".
This should give you some idea of what the inside of a first-stage AI's
mind is actually likely to be like.
--
sentience@pobox.com         Eliezer S. Yudkowsky
http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way
This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 15:04:39 MST