From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Dec 14 1998 - 14:30:02 MST
Robin Hanson wrote:
>
> Billy Brown wrote:
>
> >If you want to implement a sentient AI, there is no obvious reason to do
> >things this way. It would make more sense to implement as many mechanisms
> >as you like for suggesting possible goals, then have a single system for
> >selecting which ones to pursue.
>
> There is no obvious reason to do things the way humans do, but there is
> no obvious reason not to either. I think it is fair to say that few things
> are obvious about high level AI organization and design.
I would disagree. Few things are obviously true, but many things are
obviously false - the 8-circuit design, to continue stamping madly on the
greasy spot where there used to lie a dead horse.
I would say that humans have many conflicting goal systems, overlapping where
they shouldn't overlap and unlinked where they should be linked. I would demand an extremely
good reason before giving an AI more than one goal system. Multiple
intuitions, yes; multiple goals and multiple intentions; but not more than one
goal system - and absolutely not a silicoendocrine system to duplicate the
stresses we experience as the result of conflict. Even if there are good
evolutionary reasons! It's just too much of a risk.
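
To make the distinction concrete, here is a toy sketch in Python - the names
(GoalProposal, GoalArbiter) are invented purely for illustration, not drawn
from anyone's actual design. Any number of modules may propose candidate
goals, but exactly one arbiter owns the decision about which goal gets
pursued:

    from dataclasses import dataclass
    from typing import Callable, List, Optional

    @dataclass
    class GoalProposal:
        description: str
        urgency: float   # how strongly the proposing module pushes this goal
        source: str      # which intuition/module suggested it

    class GoalArbiter:
        """One goal system: many proposers, a single selector."""

        def __init__(self) -> None:
            self.proposers: List[Callable[[], List[GoalProposal]]] = []

        def register(self, proposer: Callable[[], List[GoalProposal]]) -> None:
            # Any number of mechanisms may suggest candidate goals...
            self.proposers.append(proposer)

        def choose(self) -> Optional[GoalProposal]:
            # ...but exactly one place decides which goal gets pursued.
            candidates = [g for proposer in self.proposers for g in proposer()]
            if not candidates:
                return None
            return max(candidates, key=lambda g: g.urgency)

    arbiter = GoalArbiter()
    arbiter.register(lambda: [GoalProposal("eat", 0.4, "hunger")])
    arbiter.register(lambda: [GoalProposal("finish the paper", 0.9, "curiosity")])
    print(arbiter.choose())   # the single goal the arbiter selects

The "multiple intuitions" all live on the proposer side; the conflict gets
resolved in exactly one place, which is the whole point.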
--
sentience@pobox.com         Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I think I know.