From: Wei Dai (weidai@eskimo.com)
Date: Tue Sep 12 2000 - 22:32:09 MDT
Robin wrote earlier:
> There is less scope for being "right" in disagreements about values.
> Once we understand what we want, and opponents decide they don't want
> that, there isn't that much more to say to them.
People's values seem to be derived from some set of fundamental values,
together with the facts they believe to be true. So we can convince people
that we are "right" about values in two ways. The first is to convince
them of a new set of facts, and the second is to convince them that their
derived values are not consistent with their fundamental values and their
beliefs about facts. For example, if you convince a theist that there
are no gods, that would probably change a number of his values.
I'm not sure whether there is a standard theory of how values change, but
it isn't hard to construct one that fits in nicely with the standard
theory of how beliefs about facts change (i.e., the Bayesian theory of
probability). Start with a utility function U that describes the
fundamental values: U maps a state of the universe x to a real number
U(x) representing how desirable that state is. Define V(U,P,K) to be the
sum over all x of U(x)*P(x|K). We can now compute someone's value on any
statement s (i.e., how much he desires that statement to be true) as
V(U,P,K and s), where U is his fundamental values, P is his prior
probability distribution, and K is all of his past experiences.