From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Jul 17 2005 - 11:04:56 MDT
> On Sun, 2005-07-17 at 06:22 -0700, William Chapin wrote:
>
>>How are 'utility function' and 'Super Goal'
>>incompatible? My analogy: Utility function - balance
>>my checkbook and work out a budget to achieve Super
>>Goal - save up for that [insert favorite diversion here].
No, Chris Capel's definition was in error. The utility function is over
*final outcomes*. It doesn't tell you how to achieve those outcomes - that
requires a world-model predicting which outcomes are likely to result from
which actions, after which you choose the action with the highest expected
utility.
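To make the separation concrete, here is a minimal Python sketch (the
names are illustrative only, not from any actual system): the utility
function scores final outcomes and nothing else, a world-model supplies
the probabilities linking actions to outcomes, and the agent maximizes
expected utility.

    def expected_utility(action, world_model, utility):
        # world_model(action) returns a dict mapping each possible final
        # outcome to its predicted probability; utility scores outcomes
        # only, and knows nothing about actions.
        return sum(p * utility(outcome)
                   for outcome, p in world_model(action).items())

    def choose_action(actions, world_model, utility):
        # The "how to achieve it" step: pick the action whose predicted
        # outcome distribution has the highest expected utility.
        return max(actions,
                   key=lambda a: expected_utility(a, world_model, utility))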
Peter de Blanc wrote:
> 'Balance my checkbook' is not a utility function. A utility function
> would be something like:
>
> f(W) = 1 if my checkbook is balanced in W, 0 if my checkbook is not
> balanced in W,
>
> where W is a member of the set of all possible universes.
Correct.
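In code, that is just an indicator function on worlds - a one-line Python
sketch, with the universe W modeled (hypothetically) as a dict of facts:

    def f(W):
        # W stands for one member of the set of all possible universes,
        # modeled here as a dict of facts about that universe.
        return 1 if W.get("checkbook_balanced") else 0

So f({"checkbook_balanced": True}) returns 1, and any world lacking that
fact scores 0.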
> Another example of a utility function is the board evaluation function
> used in a Chess program. You're probably very familiar with it already,
> but in case you're not, look for a simple Chess program which uses the
> minimax algorithm. I suspect most CS students have had to code one at
> some point. An RPOP is something very similar in spirit to this kind of
> Chess program, but with more powerful inference.
Not quite. The utility function for playing Chess would be a function over
final board positions, with utilities Win = 1, Lose = 0, Draw = 1/2.
The function used to evaluate intermediate board positions is called an
"evaluation function"; it attempts to approximate the *expected* utility of
the position. Furthermore, standard chess algorithms differ from standard
decision theory in that they minimax - always assume the opponent will make
the move that is worst for you, i.e., the best move from the opponent's
perspective - rather than assigning probability distributions over the
opponent's moves. So an RPOP (Really Powerful Optimization Process) is not
like a chess program, because an RPOP operates in a realm of probabilistic
uncertainty as to board positions and board rules, and because its
uncertainty is resolved by neutral Nature rather than by the moves of a
zero-sum opponent.
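The contrast is easy to see in code. Below is a minimal Python sketch,
assuming a hypothetical position interface (is_terminal, utility, children,
our_turn): terminal positions get the true utility (Win = 1, Draw = 1/2,
Lose = 0), the search horizon is patched with an evaluation function
approximating expected utility, and the only structural difference between
the two searches is the opponent's line - a worst-case min versus a
probability-weighted average over Nature's moves.

    def minimax_value(pos, depth, evaluate):
        if pos.is_terminal():
            return pos.utility()        # true utility: 1, 1/2, or 0
        if depth == 0:
            return evaluate(pos)        # approximates *expected* utility
        values = [minimax_value(c, depth - 1, evaluate)
                  for c in pos.children()]
        # Zero-sum opponent: assume the move that is worst for us.
        return max(values) if pos.our_turn() else min(values)

    def expectation_value(pos, depth, evaluate, prob):
        if pos.is_terminal():
            return pos.utility()
        if depth == 0:
            return evaluate(pos)
        children = pos.children()
        if pos.our_turn():
            return max(expectation_value(c, depth - 1, evaluate, prob)
                       for c in children)
        # Neutral Nature: weight successor states by their probability
        # prob(pos, c) instead of assuming the worst case.
        return sum(prob(pos, c) *
                   expectation_value(c, depth - 1, evaluate, prob)
                   for c in children)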
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence