From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Oct 25 1999 - 08:18:42 MDT
Giving an AI a separate module for positive and negative
reinforcement unnecessarily complicates the goal system. We all
agree that's bad, I hope? Even for Coercionists, there isn't anything
you can do with an elaborate "pleasure module" that you can't do with
initial goals and some conscious heuristics for skill formation.
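For concreteness, here is a minimal sketch (not from the original post) of the contrast being drawn: one design routes behavior through a separate reinforcement module, the other acts directly on stated initial goals plus explicit heuristics. All class and method names are hypothetical illustrations, not anyone's actual architecture.

    # Hypothetical sketch; names and structure are illustrative only.

    class ReinforcementModuleAI:
        """The design argued against: a separate pleasure/pain module
        scores outcomes and shapes behavior indirectly."""
        def __init__(self, reward_module):
            self.reward_module = reward_module  # extra subsystem to specify and maintain

        def act(self, options):
            # Behavior tracks whatever the reward module happens to score highly.
            return max(options, key=self.reward_module.score)


    class GoalDirectedAI:
        """The design argued for: initial goals plus consciously adopted
        heuristics, with no separate reinforcement machinery."""
        def __init__(self, goals, heuristics=None):
            self.goals = goals                    # stated directly in the goal system
            self.heuristics = heuristics or []    # explicit rules for skill formation

        def act(self, options):
            # Behavior follows directly from how well each option serves the stated goals.
            return max(options, key=lambda o: sum(g(o) for g in self.goals))

The point of the sketch is only that anything the first class expresses through its reward module, the second can express by writing the same criterion into its goals, so the extra module buys nothing.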
And don't tell me about "harder to bypass". This isn't a game of chess,
people. If the AI starts playing against you, you've obviously already lost.
--
sentience@pobox.com          Eliezer S. Yudkowsky
http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak           Programming with Patterns
Voting for Libertarians   Heading for Singularity    There Is A Better Way