} cognitive architectures give goals two values of "justification" and
} "value". Query: Given a cognitive architecture which operates through
} Power-perfect reasoning and does not give an a priori value to any goal,
I'm not sure what you see as the distinction between 'justification' and
'value'. What does negative value mean for goals? How is
"Power-perfect reasoning" different from human reasoning? Can you
really have a goal-less architecture?
} subgoal. Thus the system will produce an "Interim Meaning of Life" with
} positive value: "Find out what the Meaning of Life is."
Of course, there is no guarantee that the system will ever find an
Ultimate Meaning, and in general it can't even decide in advance whether
that search will terminate. Halting problem...
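
For what it's worth, here is a minimal sketch, in Python, of how I read the
"two values per goal" idea. This is my own illustration, not anything from
the original post: the field names (justification, value), the bootstrap
function, and the particular numbers are all assumptions on my part. The
point is just that no goal starts with a priori value, and the one subgoal
the system can still defend is "find out what the Meaning of Life is."

from dataclasses import dataclass
from typing import Optional

@dataclass
class Goal:
    """A goal carrying the two values under discussion.

    justification: the goal this one serves (None if unknown/ultimate).
    value: numeric desirability; zero by default, since nothing gets
           a priori value in this architecture.
    """
    description: str
    justification: Optional["Goal"] = None
    value: float = 0.0

def interim_meaning_of_life() -> Goal:
    # The ultimate goal is unknown and therefore carries no value yet.
    ultimate = Goal("Unknown Ultimate Meaning of Life")
    # The bootstrap step: whatever the Ultimate Meaning turns out to be,
    # discovering it is useful, so this subgoal gets positive value.
    # (The value 1.0 is an arbitrary placeholder, not a claim.)
    return Goal(
        "Find out what the Meaning of Life is",
        justification=ultimate,
        value=1.0,
    )

if __name__ == "__main__":
    interim = interim_meaning_of_life()
    print(interim.description, interim.value)

Nothing in this sketch guarantees the search it gestures at ever halts,
which is exactly the problem above.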
Merry part,
-xx- Damien R. Sullivan X-) <*> http://www.ugcs.caltech.edu/~phoenix
"You know, I've gone to a lot of psychics, and they've told me a lot of
different things, but not one of them has ever told me 'You are an
undercover policewoman here to arrest me.'"
-- New York City undercover policewoman