From: James Higgins (jameshiggins@earthlink.net)
Date: Wed Jun 26 2002 - 13:13:18 MDT
At 01:11 PM 6/26/2002 -0400, Eliezer S. Yudkowsky wrote:
>3. Someone offers a goal system in which sensory feedback at various
>levels of control - from "pain" at the physical level to "shame" at the
>top "conscience" level - acts as negative and positive feedback on a
>hierarchical set of control schema, sculpting them into the form that
>minimizes negative and maximizes positive feedback. Given that both
>systems involve the stabilization of cognitive content by external
>feedback, what is the critical difference between this architecture and
>the "external reference semantics" in Friendly AI? How and why will the
>architecture fail?
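For concreteness, here is a minimal Python sketch of the kind of architecture the question describes: a hierarchy of control levels, each with its own feedback channel ("pain" low, "shame" high), where feedback sculpts the weights of a set of control schemas toward the form that minimizes negative and maximizes positive feedback. This is my own illustrative reading, not anything from the post; the class names, the actions, and the toy environment are all assumptions.

import random

class Schema:
    """One control schema: a named action whose weight feedback sculpts."""
    def __init__(self, name):
        self.name = name
        self.weight = 1.0

class ControlLevel:
    """One level of the hierarchy, with its own feedback channel
    ("pain" at the physical level, "shame" at the conscience level)."""
    def __init__(self, name, schemas, rate=0.1):
        self.name = name
        self.schemas = schemas
        self.rate = rate
        self.last = None

    def act(self):
        # Pick a schema in proportion to its current weight.
        total = sum(s.weight for s in self.schemas)
        r = random.uniform(0.0, total)
        for s in self.schemas:
            r -= s.weight
            if r <= 0.0:
                self.last = s
                return s.name
        self.last = self.schemas[-1]
        return self.last.name

    def feedback(self, signal):
        # A negative signal ("pain"/"shame") suppresses the schema that
        # just fired; a positive signal reinforces it. Over many rounds
        # the weights settle into the form that minimizes negative and
        # maximizes positive feedback.
        if self.last is not None:
            self.last.weight = max(0.01, self.last.weight + self.rate * signal)

# Illustrative two-level hierarchy; the environment's reactions below are
# arbitrary stand-ins for external feedback, not part of the original.
physical = ControlLevel("physical", [Schema("grasp"), Schema("strike")])
conscience = ControlLevel("conscience", [Schema("share"), Schema("hoard")])

for _ in range(1000):
    physical.feedback(+1.0 if physical.act() == "grasp" else -1.0)      # "pain"
    conscience.feedback(+1.0 if conscience.act() == "share" else -1.0)  # "shame"

print([(s.name, round(s.weight, 2)) for s in physical.schemas])
print([(s.name, round(s.weight, 2)) for s in conscience.schemas])

Run repeatedly, the weights for "grasp" and "share" climb while the others decay toward the floor, which is the sense in which external feedback stabilizes cognitive content in this toy version.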
That strikes me as a technical description of the process of raising a child to be a good and proper adult. In such a case, two people generally spend many years (18 on average) providing positive and negative feedback to the intelligence in order to shape it into something desirable. Even when both people are intelligent, the outcome is too frequently less than what they desired, and sometimes it is just plain horrible.
James Higgins