From: Vladimir Nesov (robotact@gmail.com)
Date: Mon Apr 21 2008 - 13:33:34 MDT
On Mon, Apr 21, 2008 at 10:52 PM, Matt Mahoney <matmahoney@yahoo.com> wrote:
>
> Interesting. Perhaps the change in the results is because in modern times
> people are more aware of how the rich live in other countries. American
> movies, TV, and culture are seen all over the world. If you believe you have
> found an x that maximizes U(x), then you are content. If you discover it is
> a local maximum (because somebody else has a higher U(x)), then you are
> less happy.
>
Matt,
You talk about these utilities (with underspecified meaning) as if
people actually based their decisions on them, as if utilities held
causal power. In fact it's the opposite: utilities are a way to
roughly model human behavior. At best, the formalism can be read as a
description of an ideal utilitarian AI. People are not
fitness-maximizers or utility-maximizers; they are a hack that
executes adaptations. They learn when to be happy and when to be
suicidal depending on context (not that such learning is easy to
control). You use the "U(x)" thingie like "phlogiston", to create an
illusion of a justified argument.
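Note that even the local-maximum story in your paragraph needs nothing
more than a dumb hill-climber; no "utility" does any causal work inside
the agent. A toy sketch (the bimodal U and the search loop below are
invented for illustration, they are not anything from your model):

    import math

    def U(x):
        # Invented landscape: a small bump near x=1, a higher one near x=5.
        return math.exp(-(x - 1) ** 2) + 2 * math.exp(-(x - 5) ** 2)

    def hill_climb(x, step=0.01, iters=10_000):
        # Greedy local search: move while a neighbour improves U, else stop.
        for _ in range(iters):
            if U(x + step) > U(x):
                x += step
            elif U(x - step) > U(x):
                x -= step
            else:
                break  # "content": no visible improvement
        return x

    x = hill_climb(0.0)
    print(U(x))      # ~1.0 -- content at the local maximum near x=1
    print(U(5.0))    # ~2.0 -- "less happy" once a higher U(x) is visible

The sketch describes the behavior from outside; it says nothing about
what mechanism actually produces it.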
--
Vladimir Nesov
robotact@gmail.com