>
> Rules can always be broken; some are just harder to break than others.
> The question is whether we could program an AI in such a way that it
> would never even try to behave in a certain way. I think the answer is
> yes, and that such an AI could still be intelligent, just very focused.
> However, in an AI designed to
> improve itself constantly, such restrictions would
> not work. Either 1) they
> wouldn't be strong enough to defeat the motivation
> to improve or 2) in
> certain situations they would limit the AI's ability
> to restructure itself.
> I agree that an AI designed to upgrade must in the
> end be free. But that
> doesn't mean we can't influence it.
>
2 points:
1.) Having certain restrictions might spur the AI to find alternative, harmless solutions to a problem;
2.) The freedom of the AI to develop must be balanced against the possible effects on humans. Speaking selfishly, I'd plump for the latter party.
Mike