Billy Brown, <bbrown@conemsco.com>, writes:
> Suppose my parents are religious, and they feel that anyone who is not a
> member of their faith will suffer eternal damnation. Consequently, at an
> early age they have me fitted with neural implants that will ensure I
> fervently believe in their faith, that I will never violate its tenets, and
> that I am incapable of ever changing this belief system.
One answer is that, as long as the resulting person is happy with his lot, anything is moral. You can create intelligences that are stupid or smart, flexible or constrained, but as long as they are happy it is OK.
The opposite extreme would suggest that only intelligences with maximal extropy should be created: flexible, intelligent, creative, curious minds. Such people would have a wide range of possible behaviors. They would perhaps face greater dangers, but their triumphs would be all the more meaningful. Doing anything less when creating a new consciousness would be wrong, in this view.
I am inclined to think that this latter position is too strong. There would seem to be valid circumstances where creating more constrained intelligences would be useful. There may be tasks which require human-level intelligence but which are relatively boring. Someone has to take out the garbage. Given an entity which is going to have limitations on its options, it might be kinder to make it satisfied with its life.
Generalizing the case Billy describes, if the universe is a dangerous place and there are contagious memes which would lead to destruction, you might be able to justify building in immunity to such memes. This limits the person's flexibility, but it is a limitation intended ultimately to increase his options by keeping him safe.
Hal