John Clark wrote:
>Suppose the existence of objective morality is Turing unprovable. That
>means it exists, so you'll never find a counterexample to show it doesn't,
>but it also means you'll never find a proof (a demonstration in a finite
>number of steps) to show that it does. A moralist who designs an AI and
>gives the investigation of this problem priority over everything else will
>send the machine into an infinite loop. To make matters worse, you may not
>even be able to prove it's futile, that the issue is either false, or true
>but unprovable, so I don't think it would be wise to hardwire an AI to
>keep working on any problem until an answer is found.
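To make the "infinite loop" concrete: a machine hardwired the way John
describes would be running a semi-decision procedure, one that enumerates
candidates and halts only if a proof or a counterexample actually turns up.
Here is a minimal Python sketch of that idea; the is_proof and
is_counterexample checkers are hypothetical placeholders, not anything from
John's post:

# Semi-decision procedure: halts only if the question is settleable.
# If the statement is neither provable nor refutable, the loop below
# runs forever -- and, as John notes, we may not even be able to prove
# that the search is futile.

from itertools import count, product
import string

def all_strings():
    """Enumerate every finite printable string, shortest first (never ends)."""
    for length in count(1):
        for chars in product(string.printable, repeat=length):
            yield "".join(chars)

def settle(statement, is_proof, is_counterexample):
    """Return True on a proof, False on a counterexample; otherwise loop."""
    for candidate in all_strings():
        if is_proof(candidate, statement):
            return True
        if is_counterexample(candidate, statement):
            return False

# With checkers that never fire, this call never returns:
#   settle("objective morality exists",
#          lambda c, s: False, lambda c, s: False)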
An alternate way of looking at it is how people try to instill objective morality in their progeny, essentially programming them. It is easy for a parent to say "killing people is bad, helping people is good," yet circumstances dictate that there are occasions when people need to be killed (in self-defense, etc.) and when others shouldn't be helped. No matter how complex the morality appears to be, there seem to be exceptions to the alleged "objectivity." The same, of course, is true of AI: an AI programmed with "objective" morality would lack flexible, situation-based reasoning, and would likely still seem like "just a computer," Turing-proven or otherwise.
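A toy sketch of that brittleness (the actions and context flags here are
invented for illustration): each exception has to be patched in by hand, and
every patch invites another exception the rules still can't see.

def judge(action, context):
    """Hard-coded morality: every exception is another hand-written patch."""
    if action == "kill":
        if context.get("self_defense"):
            return "permitted"            # exception #1, patched in by hand
        return "bad"
    if action == "help":
        if context.get("enables_harm"):
            return "bad"                  # exception #2 -- and so on, forever
        return "good"
    return "unknown"                      # anything unanticipated falls through

print(judge("kill", {"self_defense": True}))   # permitted
print(judge("kill", {}))                       # bad
print(judge("help", {"enables_harm": True}))   # bad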
E. Shaun Russell                 Extropian, Musician, ExI Member
e_shaun@uniserve.com             <KINETICIZE *YOUR* POTENTIAL>
-------------------------------------------------------
"The reason I'm involved with Extropy...is to end the carnage."
                                 -Robert Bradbury, Extro-4