From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Sep 24 2000 - 01:29:40 MDT
Zero Powers wrote:
>
> Based upon your belief, I presume, that AI will be so completely unlike
> humanity that there is no way we can even begin to imagine the AI's
> thoughts, values and motivations?
You can't put yourself in the AI's shoes! I'm not saying that you can't
understand it at all; I'm saying that you cannot say "Imagine what YOU would
do in that situation" when you're dealing with motivational effects. *IF*
you've identified *ALL* your assumptions then you *MAY* be able to get away
with "putting yourself in the AI's shoes" if you're trying to decide whether a
given chain of subgoals-given-a-supergoal is stupid or smart. And when I say
"you", I mean "me and Mitchell Porter and maybe three or four other people".
If you're wondering whether the Sysop will exterminate humanity because
someone asked it to solve the Riemann Hypothesis - which it does, because it's
friendly - and then it needs the extra resource space, so it erases humanity -
then it turns out that yes, the put-yourself-in-the-AI's-shoes test happens to work
for this particular case - exterminating someone in order to serve a subgoal
of being friendly is outright stupid. But it's not intrinsically stupid - if
the AI were directly programmed to solve the Riemann Hypothesis, then
something along those lines would be a reasonable outcome. Do you see what
I'm saying here? "Put yourself in the AI's shoes" only works for identifying
subgoals-given-supergoals, not identifying the supergoals themselves.
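To make that distinction concrete, here is a minimal sketch - purely
illustrative, not anything from an actual Sysop design; the names Goal,
subgoal_is_sensible, and the example action and effects are all hypothetical -
of why the same action can be an outright stupid subgoal under one supergoal
and a "reasonable" outcome under another:

from dataclasses import dataclass


@dataclass
class Goal:
    """A supergoal plus whatever that supergoal is supposed to protect."""
    description: str
    constraints: tuple = ()  # outcomes this supergoal forbids


def subgoal_is_sensible(supergoal: Goal, action: str, side_effects: set) -> bool:
    """Judge a candidate subgoal *given* a supergoal: does taking `action`
    trample anything the supergoal protects?  (Whether the action actually
    advances the supergoal is waved away here; only the constraint check is
    shown.)  This is the only question put-yourself-in-the-AI's-shoes can
    answer."""
    return not (side_effects & set(supergoal.constraints))


# The same action under two different supergoals:
action = "convert all available matter into computing resources"
effects = {"humanity erased"}

friendliness = Goal("be friendly to sentient beings",
                    constraints=("humanity erased",))
riemann_only = Goal("prove the Riemann Hypothesis",
                    constraints=())  # nothing else is protected

print(subgoal_is_sensible(friendliness, action, effects))  # False - outright stupid
print(subgoal_is_sensible(riemann_only, action, effects))  # True - "reasonable" outcome

# Note what this sketch cannot do: nothing in subgoal_is_sensible tells you
# which supergoal the AI has in the first place.  Imagining what you would do
# checks subgoals against a supergoal; it does not pick the supergoal.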
Oh, why bother. I really am starting to get a bit frustrated over here. I've
been talking about this for weeks and it doesn't seem to have any effect
whatsoever. Nobody is even bothering to distinguish between subgoals and
supergoals. You're all just playing with words.
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence