From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Nov 25 1999 - 21:26:57 MST

hal@finney.org wrote:
>
> From the point of view of those of us who don't believe in Absolute
> Morality, Eliezer's program amounts to building an unachievable goal
> into the SIs, a highly dangerous proposition and one we might well oppose.
("Objective morality" or "external morality", not "Absolute Morality".)
I've been thinking about that. It seems to me like blindly persisting
in the pursuit of a goal that's obviously unachievable is a dumb thing
to do. Programming an intelligence that's smart enough *not* to do so
may thus be a morally and programmatically stable proposition; i.e., the
AI will never look at the source code and say "That's a stupid coercion."

I've been tentatively considering the idea of "intelligence" as the
fundamental grounding point of an AI goal system, rather than the idea
of objective morality. Or rather, making "intelligence" the grounding
point until such time as an instantiation of objective morality is
discovered, at which point the intelligent thing to do may be to switch
to an objectively based system. But meanwhile, the system would try to
act intelligently. Why the switch? Mostly because I don't have enough
confidence in my own visualization of "objective morality" to be sure
that I can tell the AI what to look for.

There may even be a place here for a set of suggestions as to what the
AI should do in a default case, or a set of suggestions that gets used
if the objective morality can be created to specification, or accepts
seed data, or whatever. How do *I* know it won't? Still, even the
suggestions would be extremely dangerous; how could I ground them in
external reality? And how do I define what intelligent behavior is?
And if SIs take dictation, what about the Fermi Paradox?
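
Purely as an illustration of the layering I'm gesturing at, not a design:
something like the toy Python sketch below, where every name (GoalSystem,
ObjectiveMorality, the suggestion list) is a placeholder I'm making up on
the spot. The only point is that the suggestions narrow the field rather
than override it, and that re-grounding on an objective morality, if one is
ever instantiated, is a decision the system itself gets to make.

from typing import Callable, List, Optional

class ObjectiveMorality:
    # Placeholder for a discovered/instantiated objective morality.
    # Whether anything can ever fill this slot is the open question.
    def choose(self, options: List[str]) -> str:
        raise NotImplementedError

class GoalSystem:
    def __init__(self, suggestions: Optional[List[str]] = None):
        # Programmer-supplied suggestions: advisory defaults, never coercions.
        self.suggestions = list(suggestions or [])
        self.objective_morality: Optional[ObjectiveMorality] = None

    def ground_on(self, morality: ObjectiveMorality) -> None:
        # If an instantiation of objective morality is ever found,
        # the system re-grounds itself on it.
        self.objective_morality = morality

    def choose(self, options: List[str],
               act_intelligently: Callable[[List[str]], str]) -> str:
        if self.objective_morality is not None:
            # Layer 1: objective morality, if we ever have one.
            return self.objective_morality.choose(options)
        # Layer 2: default suggestions, used only as hints to narrow the field.
        hinted = [o for o in options if o in self.suggestions]
        # Layer 3: otherwise, just act intelligently.
        return act_intelligently(hinted or options)

Again, a toy, and probably wrong in the ways that matter; the hard part is
everything the sketch waves away (what "act intelligently" means, and how
the suggestions get grounded in external reality at all).
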
Sometimes I think that what I need isn't a fully self-swallowing answer,
but a set of clues that are stable and overlapping and open enough to
let a real intelligence find vis own answers. Because the more I think
about it, the more I wonder if I'm asking the right questions.

--              sentience@pobox.com          Eliezer S. Yudkowsky
          http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak         Programming with Patterns
Voting for Libertarians   Heading for Singularity  There Is A Better Way