> On Wed, 13 Aug 1997, Geoff Smith wrote:
>
> > As someone in another thread pointed out, game theory does not
> > apply to post-singularity entities.
>
> Huh? Could this someone please explain why it would not apply after
> the singularity? Many situations in game theory will not be changed
> if the players are ultra-intelligent (in fact, game theory often
> assumes the players are very rational, more rational than most humans
> are).
>
I was wondering how this idea was put into my head, so I looked
back in the list a bit, and found this quote from Eliezer S. Yudkowsky:
> I'm just not sure that game theory applies to posthumans. It's based on
> rather tenuous assumptions about lack of knowledge and conflicting
> goals. It works fine for our cognitive architecture, but you can sort
> of see how it breaks down. Take the Prisoner's Dilemma. The famous
> Hofstadterian resolution is to assume that the actions of the other are
> dependent on yours, regardless of the lack of communication. In a human
> Prisoner's Dilemma, this, alas, isn't true - but assuming that it is,
> pretending that it is, is the way out.
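For concreteness, here's a minimal Python sketch of that argument; the
payoff numbers are the usual illustrative ones, not anything from the
quote:

    # payoff[(my_move, their_move)] = (my_payoff, their_payoff)
    PAYOFF = {
        ("C", "C"): (3, 3),  # mutual cooperation
        ("C", "D"): (0, 5),  # I cooperate, they defect
        ("D", "C"): (5, 0),  # I defect, they cooperate
        ("D", "D"): (1, 1),  # mutual defection
    }

    def best_reply(their_move):
        # Classical reasoning: the other's move is fixed and
        # independent of mine.
        return max("CD", key=lambda mine: PAYOFF[(mine, their_move)][0])

    def superrational_choice():
        # Hofstadter's assumption: identical reasoners decide
        # identically, so only the diagonal outcomes (C,C) and (D,D)
        # are actually attainable.
        return max("CD", key=lambda move: PAYOFF[(move, move)][0])

    # Classically, defection dominates no matter what the other does...
    assert best_reply("C") == "D" and best_reply("D") == "D"
    # ...but under the correlated-choice assumption, cooperation wins.
    assert superrational_choice() == "C"

Whether that correlated-choice assumption holds better for posthumans
than for humans seems to be exactly the question.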
So this is the man to talk to ;)
I'm not knowledgeable enough in this area to lean either way.
geoff.