From: Nick Hay (nickjhay@hotmail.com)
Date: Fri Aug 15 2003 - 18:41:35 MDT
king-yin yan wrote:
> Friendliness does not specify the details of morality, but the FAI
> is supposed to deal with them. Therefore Friendliness actually
> indirectly determines morality. Otherwise the FAI will not interfere
> with moral issues.
Right, the FAI does have to deal with morality, but the morality it develops is
not sensitive to what the programmers think - this moves Friendliness out of
the realm of human politics and into that of engineering. It has to evaluate
things itself, especially through the transition to superintelligence.
Friendly AI is the art of giving an AI the ability to think morally like us.
So it does indirectly determine morality - a Friendly AI will have a humane
morality (the class of humane moralities is grounded in the kind of
moralities humans would choose to have if they had the intelligence, wisdom,
and ability to choose properly). But it doesn't determine morality at the
level where humans disagree, e.g. the moral status of abortion and suicide.
> Come to think of it, that seems to be a better
> AI -- as a tool to solve cognitive problems, but no more than that.
All problems are moral issues: things are done as means to an end. One can
either represent this explicitly in the AI, so it knows the reason why you're
asking it to, say, determine the logical status of the Riemann Hypothesis, or
not. The former is a mind, the latter a tool.
A tool can only be as moral as its wielder, and often less so, since it doesn't
know what you *really* mean and so often doesn't do what you intended. You
only transfer the tail end of your justification, so it can't correct many of
your mistakes. For instance, a car is a tool. You transfer only the tail end
of the subgoal "get to X without crashing", namely "drive forwards". Introduce
a wall, the car drives forwards into the wall as ordered, and you die.
This is the behaviour of a tool - it cannot correct most mistakes in the hands
of its user. As another example, a gun is a tool. In the hands of a murderer
it kills. This is the behaviour of a tool - it does what it's told; it's only
as moral as its user. An AI that solely solves cognitive problems is a tool.
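To make the distinction concrete, here is a toy sketch in Python (names and
structure are mine, purely illustrative, not anyone's actual design): the tool
only ever sees the order "drive forwards", while the mind also carries the
parent goal and can correct an order that would defeat it.

  class CarTool:
      """Tool: does exactly what it's told - the tail end of the chain."""
      def act(self, order, world):
          if order == "drive forwards":
              return "crashed into wall" if world["wall_ahead"] else "moved forwards"
          return "idle"

  class DrivingMind:
      """Mind: also holds the parent goal, so it can refuse a bad order."""
      parent_goal = "get to X without crashing"
      def act(self, order, world):
          if order == "drive forwards" and world["wall_ahead"]:
              return "refused: order conflicts with goal '%s'" % self.parent_goal
          return CarTool().act(order, world)

  world = {"wall_ahead": True}
  print(CarTool().act("drive forwards", world))     # crashed into wall, as ordered
  print(DrivingMind().act("drive forwards", world)) # refused - mistake corrected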
A mind can be more or less moral than its creator; there is no upper bound.
This is an additional risk in the hands of those who don't know what they're
doing, a benefit in the hands of those that do. Friendly AI describes the
structure and content needed for an AI to surpass the morality of present day
humanity, especially as it gains transhuman intelligence. We don't want the
AI tied to our mistakes, especially the ones we can't see yet (it was only a
few hundred years ago that slavery was seen as moral - what moral
imperfections still remain?). We want solutions to problems we don't even know exist.
- Nick