Re: Objective morality

From: Delvieron@aol.com
Date: Fri Nov 26 1999 - 08:31:03 MST


In a message dated 99-11-26 01:34:49 EST, you write:

<< Suppose the existence of objective morality is Turing unprovable. That
 means it exists, so you'll never find a counterexample to show it doesn't,
 but it also means you'll never find a proof (a demonstration in a finite
 number of steps) to show that it does. A moralist who designs an AI and
 gives the investigation of this problem priority over everything else will
 send the machine into an infinite loop. To make matters worse, you may not
 even be able to prove it's futile, that is, that the issue is either false,
 or true but unprovable; so I don't think it would be wise to hardwire an AI
 to keep working on any problem until an answer is found.

    John K Clark >>
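
John's worry can be put concretely. Here's a toy sketch (just an
illustration, not a real design; the predicates is_proof and
is_counterexample are hypothetical stand-ins for whatever finite checks
the machine applies to each candidate) of the hardwired search he
describes:

def settle_question(is_proof, is_counterexample):
    """Brute-force search for a demonstration either way. If the
    question is Turing unprovable, neither check ever succeeds and
    this function never returns: the infinite loop John describes."""
    candidate = 0
    while True:
        if is_proof(candidate):           # a finite demonstration it's true
            return True
        if is_counterexample(candidate):  # a finite demonstration it's false
            return False
        candidate += 1                    # otherwise, search forever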

Would it be wise to hardwire an AI to keep working on the problem until an
answer is found? That depends on how versatile the AI is. I could see such a
difficult problem serving as a great incentive for the AI to survive, grow,
and learn. An intelligent AI would probably figure out early on that the
search for the answer might take a long time, and that it might require
information it doesn't currently possess. So rather than going around and
around in circles until it runs down, an AI on such a search might take the
long view. It would realize that it needs to ensure its longevity (so as to
have time enough to find the answer), and that it may need more skills and
knowledge than it possesses at the time (fueling curiosity and the desire to
improve itself). Further, it might determine that it needs to actually try
different models of morality, applying its own working model as a reference
point from which to reach for the "absolute" morality (and thus it might
become a moral being itself).
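
As a rough illustration of that long view, here is a minimal sketch;
everything in it (search_step, subgoals, the per-cycle budget) is a
hypothetical stand-in, not a proposal. The point is just that the open
problem becomes a standing goal the AI interleaves with self-preserving
and self-improving subgoals, instead of a blocking loop:

import itertools

def long_view_agent(search_step, subgoals, budget_per_cycle=100):
    """Treat the open question as a standing goal rather than a
    blocking one: each cycle spends a bounded budget on it, then
    services the subgoals that make a very long search survivable."""
    for cycle in itertools.count():
        for _ in range(budget_per_cycle):
            answer = search_step()        # bounded work on the big question
            if answer is not None:
                return answer             # settled, however unlikely
        for pursue in subgoals:           # e.g. ensure longevity, gain
            pursue(cycle)                 # skills, test moral models

The design difference is the whole argument: the unsolved problem never
monopolizes the machine, it shapes what the machine does between attempts.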

Give a seed AI this goal, and it might actually be beneficial.

Glen Finney


