From: Lee Corbin (lcorbin@tsoft.com)
Date: Tue Jul 02 2002 - 05:38:26 MDT
Eliezer writes
> > Now *that* I understand! So wouldn't it make sense to *first* have
> > an AI *understand* what many people have believed to be "moral" and
> > "ethical" behavior---by reading the world's extensive literature---
> > before attempting to engage in moral behavior, or even to try to
> > empathize?
>
> You can only build Friendly AI out of whatever building blocks the
> AI is capable of understanding at that time. So if you're waiting
> until the AI can read and understand the world's extensive literature
> on morality... well, that's probably [suicide].
I'm pretty sure that Cyc's goal has been for a long time to be
able to read random newspapers, stories, and net info. It's
probably the common urge (as it would be with me) to first get
the program capable of understanding, and only then to permit
it to affect the world. And, of course, before taking that
last giant step, implementing Friendliness would be necessary.
Your reordering of these priorities, placing Friendliness first,
is most interesting, if I understand you right. I'll probably
have to study your architecture to understand how Friendliness
could arise before understanding.
Lee
> Once the AI can read the world's extensive literature on morality and
> actually understand it, you can use it as building blocks in Friendly AI
> or even try to shift the definition to rest on it, but you have to use
> more AI-understandable concepts to build the Friendliness architecture
> that the AI uses to get to that point.