From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Nov 26 1999 - 08:57:36 MST
"E. Shaun Russell" wrote:
>
> The same, of course, is true of AI; if
> someone tried to program objective morality into an AI, the lack of
> flexible rationality and reasoning based on situations would likely still
> resemble "just a computer" whether Turing proven or otherwise.

I think this goes to the root of the problem I (and others) are worrying
about. The logic I've worked out now is good, solid, and
self-swallowing, but I'm a human. For an AI, that logic implies a
particular low-level implementation, and that implementation may not be
good. For
characteristics like flexibility, common sense, the ability to avoid
doing dumb things like converting the entire Universe into a set of
regular polyhedra, spotting mistakes your designers didn't know about,
and adapting to new information as well as you would have if your
designers had known about it to begin with, you need high-level
characteristics that emerge from a lower level that usually has no direct
semantic interpretation. The logic I've got now, if implemented
directly, would be too crystalline.

Since this is true, an AI design based on that fact should not look at
itself and go: "Yuck, my design has been screwed up by the silly
humans." Not reaching that conclusion is, let it be understood, the very
first requirement for stable self-improving AI.

--
sentience@pobox.com    Eliezer S. Yudkowsky
http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS    Typing in Dvorak    Programming with Patterns
Voting for Libertarians    Heading for Singularity    There Is A Better Way