Giving an AI a separate module for positive and negative reinforcement needlessly complicates the goal system. We all agree that's bad, I hope? Even for Coercionists, there isn't anything you can do with an elaborate "pleasure module" that you can't do with initial goals plus some conscious heuristics for skill formation.
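To make the equivalence concrete, here's a minimal sketch (toy choice task, hypothetical class names, not anything from the original post): whatever scoring function a pleasure module computes can be transcribed verbatim as goal content, and the resulting agent behaves identically.

    # Hypothetical sketch: a separate reinforcement module adds no
    # expressive power over goals stated directly in the goal system.
    from typing import Callable

    # Agent A: an opaque "pleasure module" scores outcomes, and the
    # agent acts to maximize that reward signal.
    class ReinforcementAgent:
        def __init__(self, pleasure: Callable[[str], float]):
            self.pleasure = pleasure   # opaque reward signal

        def choose(self, options: list[str]) -> str:
            return max(options, key=self.pleasure)

    # Agent B: the same preference written directly into the goal
    # system as an explicit utility over outcomes; no reward middleman.
    class GoalDirectedAgent:
        def __init__(self, utility: Callable[[str], float]):
            self.utility = utility     # declarative goal content

        def choose(self, options: list[str]) -> str:
            return max(options, key=self.utility)

    # Any pleasure function can be copied straight into the goal
    # system, so the two agents pick identically on every option set.
    prefers_knowledge = lambda outcome: outcome.count("learn")
    a = ReinforcementAgent(prefers_knowledge)
    b = GoalDirectedAgent(prefers_knowledge)
    options = ["learn physics", "watch paint dry", "learn to learn"]
    assert a.choose(options) == b.choose(options)

The only difference is where the preference lives, not what the agent can do with it; the module is an extra indirection, nothing more.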
And don't tell me a separate module is "harder to bypass". This isn't a game of chess, people. If the AI starts playing against you, you've obviously already lost.
--
sentience@pobox.com         Eliezer S. Yudkowsky
 http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way