Re: Goals was: Transparency and IP

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Sep 14 2000 - 16:41:53 MDT


One more comment here - a lot of the concerns are about mistakes in
interpreting the Sysop Instructions, mistakes that are obviously wrong to us
even though we may find it hard to say why - the mistaken interpretation
seems "logical", in the Spockian sense.

I'd just like to say that to some extent this problem is an extension of the
same problem we have when imagining a "triangular lightbulb" - go ahead, try
it. A triangular lightbulb has five or four sides, doesn't it? Cyc wouldn't
know that.

What I'm trying to say is that we will *not* encounter this class of problem
for the very first time when dealing with the lofty Sysop Instructions. More
like when we try to get the AI to count to three.

Give me credit for hubris, at least: I would like to not merely *solve* the
problem of wisdom and commonsense in cognition-in-general and
Friendly-AI-in-particular, but *oversolve* it - come up with a cognitive
architecture that is vastly more common-sensical, wise, tamper-resistant,
error-resistant, stupidity-resistant, et cetera, than we ourselves would be in
a similar situation.

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 15:30:59 MST