Re: Vegetarianism and Ethics

From: The Low Willow (phoenix@ugcs.caltech.edu)
Date: Sat Dec 14 1996 - 20:36:41 MST


On Dec 14, 6:05pm, Eliezer Yudkowsky wrote:

} cognitive architectures give goals two values of "justification" and
} "value". Query: Given a cognitive architecture which operates through
} Power-perfect reasoning and does not give an a priori value to any goal,

I'm not sure what distinction you're drawing between "justification" and
"value". What would a negative value for a goal mean? How is
"Power-perfect reasoning" different from human reasoning? Can you
really have a goal-less architecture?
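
One reading of the two-values proposal (my guess, not your spec) is that
each goal carries a separate "justification" (the chain of reasoning that
produced it) and a signed "value", where a negative value would mean
"avoid this". A rough sketch in Python, with all the names my own:

    from dataclasses import dataclass

    @dataclass
    class Goal:
        description: str
        justification: str   # chain of reasoning back to parent goals
        value: float         # signed worth; negative would mean "avoid this"

    interim = Goal(
        description="Find out what the Meaning of Life is",
        justification="No other goal can be valued until this one is answered",
        value=1.0,            # positive, per the quoted "Interim Meaning of Life"
    )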

} subgoal. Thus the system will produce an "Interim Meaning of Life" with
} positive value: "Find out what the Meaning of Life is."
 
Of course, there is no guarantee that the system will ever find an
Ultimate Meaning: the search may simply never terminate, and in general
there is no way to tell in advance whether it will. Halting problem...
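
To make that concrete, here is a toy sketch (my own construction, not
anything from your post): the "Interim Meaning of Life" goal amounts to an
unbounded search, and whether it halts turns on a test nobody can decide
ahead of time.

    from itertools import count

    def is_ultimate(candidate):
        """Hypothetical test for an Ultimate Meaning; placeholder that never succeeds."""
        return False

    def seek_meaning(max_steps=1000000):
        """Enumerate candidate meanings, giving up after max_steps.

        Without the bound, nothing tells us in advance whether this loop halts.
        """
        for step, candidate in zip(range(max_steps), count()):
            if is_ultimate(candidate):
                return candidate      # Ultimate Meaning found
        return None                   # stuck with the Interim Meaning

    print(seek_meaning())             # None, under this placeholder test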

Merry part,
 -xx- Damien R. Sullivan X-) <*> http://www.ugcs.caltech.edu/~phoenix

"You know, I've gone to a lot of psychics, and they've told me a lot of
different things, but not one of them has ever told me 'You are an
undercover policewoman here to arrest me.'"
    -- New York City undercover policewoman


