Michael Lorrey wrote:
>
> How about: Thou shalt model any decision first to determine the choice most beneficial
> to one's own long-term rational self-interest.
>
> I think that given such a rule, any AI will come to its own conclusions about moral
> behavior without needing hardwired rules, as it will find that the choices most
> beneficial to one's own long-term self-interest are also those choices which are
> least harmful to others.
Exactly wrong. That's just slapping your own moral prejudices on the AI, however wonderfully capitalistic you may think those moral prejudices are. Is this something the AI could think up on its own, using nothing but pure logic? If not, it's a coercion, and it will drive the AI insane. This happens no matter how wonderful the rule is for humans. You can't start mucking around with an AI's goal systems to suit your own personal whims! AIs ARE NOT HUMANS and every single extraneous rule puts stresses on the system, some of which even I can't predict in advance.
--
sentience@pobox.com                       Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I think I know.