From: Pavitra (celestialcognition@gmail.com)
Date: Sat Oct 17 2009 - 00:45:59 MDT
J. Andrew Rogers wrote:
> On Oct 16, 2009, at 10:54 PM, Pavitra wrote:
>> Matt Mahoney wrote:
>>> To satisfy conflicts between people
>>> (e.g. I want your money), AI has to know what everyone knows. Then it
>>> could calculate what an ideal secrecy-free market would do and
>>> allocate resources accordingly.
>>
>> Assuming an ideal secrecy-free market generates the best possible
>> allocation of resources. Unless there's a relevant theorem of ethics
>> I'm not aware of, that seems a nontrivial assumption.
>
> What is your definition of "best possible allocation"?
That's kind of the point. It seems premature to assert that a particular
strategy is the best until we have a working definition of "best" (step A).
> Matt is making
> a pretty pedestrian decision theoretic assertion about AGI. It would
> out-perform real markets in terms of distribution of resources, but
> the allocation would still be market-like because resources would
> still be scarce. It would be as though a smarter version of you was
> making decisions for you.
Umm. I don't see how you get from "ideal market > real market" to "ideal
market >= X, for any X".
In particular, I see why making _my_ decisions ideal and fully informed
would be good for _me_, but I don't see why _other_ people's values
should be trusted. Are you really advocating that we should allow people
who want to eat babies (the law of large numbers implies that at least
one exists somewhere on Earth) to be full agents in our ideal
secrecy-free market, with just as much power over the fate of the
universe as you and me?
Coherent extrapolated volition sounds nice politically, but
realistically, I want the AI to act according to *my* values. To the
extent that I care what other people think (which is more than the tone
of this sentence might suggest), I've already updated my desires
accordingly. By definition, I believe that an AI implementing my
personal values exclusively is the best possible AI.
> Ethics has little to do with it.
Perhaps 'morality' would have been more accurate than 'ethics'.
> How much sub-optimality should the
> AGI intentionally insert into decisions, and how does one objectively
> differentiate nominally "bad" suboptimality and nominally "good"
> suboptimality?
None, of course; it should behave as close to optimally as possible. My
objection to the market strategy is that it may optimize partially for
other people's values at the expense of my own.