Billy Brown wrote:
> Nick Bostrom was arguing in favor of programming a fundamental moral system
> into the AI, and then turning it loose with complete free will. My argument
> is that this is very unreliable - the more complex a mind becomes, the more
> difficult it is to predict how its moral principles will translate into
> actions. Also, an intelligent entity will tend to modify its moral system
> over time, which means that it will not retain an arbitrary set of
> principles indefinitely.
Yes, but how an individual's principles evolve will have a direct impact on that individual's fitness to survive. I think the opposite of you, though: the more intelligent an AI is, the more rigorously logical it will be, and thus it will actually be far more predictably reliable than a less intelligent AI. Free will has far more to do with how we rationalize the things we do. When we do something, we or others wonder why we did it; we rationalize an explanation for it, and thus program ourselves to respond similarly in associated situations. An AI that is more intelligent than we are will likely be far more logical in its rationalizations for its actions than we are.
>
> Now, I don't think that ongoing mental coercion is a good idea either, but
> that's a different line of argument. I would expect that you could devise
> an effective scheme for controlling any static mind, so long as it isn't too
> much smarter than you are. If you want to control something that is
> self-modifying you've got big problems - how do you design a control
> mechanism that will remain effective no matter what your creation evolves
> into?
Mike Lorrey