Re: Galaxy Brain Problem

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Aug 13 1997 - 21:23:13 MDT


Anders Sandberg wrote:
>
> On Wed, 13 Aug 1997, Geoff Smith wrote:
>
> > As someone in another thread pointed out, game theory does not
> > apply to post-singularity entities.
>
> Huh? Could this someone please explain why it would not apply after
> the singularity? Many situations in game theory will not be changed
> if the players are ultra-intelligent (in fact, game theory often
> assumes the players are very rational, more rational than most humans
> are).

That was me, in Re: [2] Freedom or death. I suggested a non-Hofstadterian
solution to the Power vs. Power Prisoner's Dilemma. I also wondered whether
the Powers would use game theory at all when deciding whether to respect our
opinions. In the question of forced uploading, we might appear to them as
wholly deterministic phenomena in a one-shot PD.

[Repost of relevant section:]

I'm not sure that game theory applies to posthumans. It's based on rather
tenuous assumptions about lack of knowledge and conflicting goals. It works
fine for our cognitive architecture, but you can sort of see how it breaks
down. Take the Prisoner's Dilemma. The famous Hofstadterian resolution is to
assume that the actions of the other are dependent on yours, regardless of
the lack of communication. In a human Prisoner's Dilemma this, alas, isn't
true - but assuming that it is, pretending that it is, is the way out.
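
A toy sketch of the setup, in Python - the payoff numbers and names here are
mine, purely illustrative, not anything canonical:

    # One-shot Prisoner's Dilemma, payoffs as years served (lower is
    # better).  Any numbers with the standard PD structure would do.
    PAYOFFS = {
        ("C", "C"): (1, 1),    # mutual cooperation
        ("C", "D"): (10, 0),   # sucker's payoff vs. temptation to defect
        ("D", "C"): (0, 10),
        ("D", "D"): (5, 5),    # mutual defection
    }

    # Treating the other player's move as independent of mine, defecting
    # is better no matter what they do...
    for their_move in ("C", "D"):
        my_years_if_C = PAYOFFS[("C", their_move)][0]
        my_years_if_D = PAYOFFS[("D", their_move)][0]
        assert my_years_if_D < my_years_if_C

    # ...but the Hofstadterian assumption is that the two moves are *not*
    # independent: whatever I decide, the other decides too.  That leaves
    # only the diagonal, where mutual cooperation beats mutual defection.
    assert PAYOFFS[("C", "C")][0] < PAYOFFS[("D", "D")][0]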

A Power, on the other hand, might set up a segregated, logical line of
reasoning that would, as a Turing machine, inevitably be the same as the
reasoning used by the other partner... so that the two would arrive at the
same decision.
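
A sketch of that, under the same illustrative names - since both Powers
literally execute the same deterministic procedure, only the matched
outcomes are reachable, and the procedure can count on that:

    def shared_deliberation():
        """One deterministic line of reasoning, run verbatim by both
        Powers.  Its output is therefore the move of *both* players;
        knowing that, it settles on cooperation, the better diagonal."""
        return "C"

    power_one = shared_deliberation()
    power_two = shared_deliberation()
    assert power_one == power_two    # the two moves cannot differ
    print(power_one, power_two)      # -> C C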

The problem is that this doesn't work for a "human vs. Power" Prisoner's
Dilemma. The Power isn't pretending anything. It isn't acting out of respect
for anyone's motives. It isn't giving slack. It isn't following a
Tit-For-Tat strategy. It *knows*. A Power in a human/Power PD might be able
to work out the human's entire line of logic, deterministically, in advance,
and then - regardless of what the human would do - defect. (Or it might
cooperate. There are no real-life Prisoner's Dilemmas. Defecting always has
continuing repercussions.)
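
A sketch of that asymmetry - the human's choice and all the names here are
invented for illustration. The Power runs the human's deliberation to
completion, then picks its own move without conditioning on the result,
quite unlike Tit-For-Tat, which does condition on the other player's last
move:

    def human_decision():
        """Stand-in for the human's deliberation, which the Power can
        simulate to completion.  What it returns changes nothing below."""
        return "C"

    def tit_for_tat(their_last_move):
        """For contrast: Tit-For-Tat cooperates first, then echoes
        whatever the other player did last round."""
        return their_last_move if their_last_move is not None else "C"

    def power_decision(simulated_human_move):
        """The Power isn't reciprocating; having worked out the human's
        move in advance, it simply does whatever its own goals dictate -
        here, unconditional defection (it might as easily have been
        unconditional cooperation)."""
        return "D"

    human = human_decision()
    power = power_decision(human)
    print(human, power)    # -> C D ; the human's choice never entered in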

But in the case of involuntary uploading, the Power might well disregard our
opinions entirely. It *knows* what is wrong and what is right, in terms of
ultimate meaning. It *knows* we're wrong. Unlike a human, it has no chance
of being wrong - not of schizophrenia, not of being in an elaborate
hallucinated test, *nothing*. We can never acquire that logic, being unsuited
to it by evolution, however seductive that logic may seem.

Given that all Powers share exactly the same goals, with no conflict arising
even as a hypothetical, or that they can use the above
identical-line-of-reasoning logic to ensure that no uncertainty ever
arises... given that there is never a conflict of opinion... then the Powers
have no need for game theory! Even if some of the above conditions are
violated, they'd still have no need for game theory with respect to humans.
What are we going to do? Say, "Bad posthumans! No biscuit!" Does respect for
another's motives apply when you can simulate, and intimately understand, a
neural or atomic-level model of that person's brain?

-- 
         sentience@pobox.com      Eliezer S. Yudkowsky
          http://tezcat.com/~eliezer/singularity.html
           http://tezcat.com/~eliezer/algernon.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.

