SOC: Tractatus Theopoliticus (was: Is vs. Ought)

From: GBurch1@aol.com
Date: Thu Nov 25 1999 - 06:46:15 MST


In a message dated 99-11-21 12:04:47 EST, sentience@pobox.com (Eliezer S.
Yudkowsky) wrote:

> Greg Burch said:
> >
> > This smacks of the kind of naive "scientific" approach to society that
> > one finds in Marx and his followers. Substitute "the vanguard party"
> > for "SI", and you have the kind of elitist, "we know what's best for
> > society" mentality that leads inevitably to Bolshevism.
>
> As a meme, maybe. As a reality, never.

"Never" seems like an extremely powerful statement to make in this context,
Eliezer. Just so that it's clear, are you saying that there is no question
in your mind that letting an SI run human affairs is preferable to any
arrangement of society humans might work out on their own?

> <snip> The reality of a state run by Bolsheviks and the reality of a world
> rewritten by an SI would be utterly, unimaginably different.

I'll grant you that it will be different, but I've never been convinced that
the world(s) that SIs might create would be "unimaginably" different from the
one we know now. Just as folks like Robin and Robert and Anders can take
what seem to be the fundamental physical nature of reality and the basic
structure of information theory and make rational projections of the nature
and behavior of vastly more powerful minds, I think it might well be possible
to make the same kind of informed speculation about what sorts of societies
such entities might create. Economics and game theory - and yes, even
biology and history - provide some pretty powerful tools for addressing such
issues.

> Humans
> have always been arguing that their political parties know best. It's
> human nature. The whole point of building an SI is to get out of the
> trap by transcending human nature.
  
I understand that you are deeply convinced that making an SI that basically
"takes over" is the "one best way" forward to the future, but I'm not. For
the record, the "taking over" is the part about which I'm not at all certain;
not because I think "we" humans could successfully oppose such a "take-over",
but because it may be more difficult than you imagine to construct an
effective SI that will WANT to "take over" human affairs.

> We don't trust humans who claim to know best, because we know that
> humans have evolved to believe they know what's best and then abuse that
> power for their own benefit. But to extend this heuristic to SIs
> borders on the absurd. And that makes <turn power over to SIs>
> different from <turn power over to me> in practice as well. The latter
> says "Keep playing the game, but give me more points"; the former says
> "Smash the damn game to pieces."

Some observations about advocating "smashing" the status quo. First, such
rhetoric is inherently oppositional and confrontational. To me, such
rhetoric doesn't seem conducive to, for instance, attracting investment.
Second, the use of such rhetoric seems to me to cultivate the kind of
revolutionary mind-set that has, in the past at least, been ill-suited to
seeing alternatives and fruitful contradictions. Social revolutionaries tend
to be single-minded, and single-mindedness doesn't lend itself to the
scientific cast of mind that is open to new possibilities. This is why the
Hollywood "mad scientist" caricature has always seemed so unrealistic to me.
  
> > I honestly can't imagine what process
> > you're picturing this SI would engage in to make a "scientific" decision.
>
> But that doesn't preclude the SI doing so! The whole foundation of
> "letting SIs decide" is the belief that somewhere out there is some
> completely obvious answer to all of the philosophical questions that
> perplex us.

This statement smacks of Platonism to me, but I could be misled by your
rhetoric and be missing some deeper truth in what you seem to be advocating
as a social policy. But consider the possibility that in fact we've already
discerned some truths that make the kind of "completely obvious answer" to
social and moral questions literally impossible.

Again, I refer to Damien's post in this thread in which he referred to chaos
and complexity theory. In particular, consider the strong objections we can
now make to the possibility of true omniscience (in the traditional religious
sense) based on information theory: Any system capable of actually predicting
the physical future course of the entire universe would itself have to be a
physical information structure at least as complex as the universe itself,
which is impossible at the most fundamental level for two reasons. First, on
a purely logical basis, a universe-simulator would be part of the universe
and could not devote resources to predicting both the not-self parts of the
universe and the part that makes up itself. Second, on more practical
physical grounds, such a system, constrained by the limit of light-speed,
could not run even a perfectly accurate model of the not-self parts of the
universe fast enough to make accurate predictions.
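
To put the first point a bit more formally - this is only a minimal counting
sketch in TeX notation of my own choosing, assuming finite state spaces that
factor over disjoint subsystems:

    Let the universe $U$ split into a simulator $S$ and its complement
    $U \setminus S$, with state spaces that factor over disjoint parts:
    \[ |\Sigma_U| = |\Sigma_S| \cdot |\Sigma_{U \setminus S}| . \]
    For $S$ to hold a distinct representation of every possible state of
    $U$, we would need
    \[ |\Sigma_S| \ge |\Sigma_U| = |\Sigma_S| \cdot |\Sigma_{U \setminus S}| , \]
    which forces $|\Sigma_{U \setminus S}| \le 1$: nothing would be left
    outside the simulator, contradicting the assumption that $S$ is a
    proper part of $U$.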

This abstract thought problem has important implications for what you seem to
be advocating, which is to me just a technologically updated version of
Plato's Republic, i.e. rule by "the best" or some AI-philosopher-king. The
most fundamental conflict in the ur-thought of Western political theory was
between what might be called the basilica and the agora (to mix Latin and
Greek metaphors), i.e. between rule by one or a few with superior knowledge
and power on the one hand, and rule by the open, on-going adjustment of
social relations by the inter-workings of all members of a society on the
other hand. This conflict has played itself out in every age and every
society, in my opinion, and seems to me to be THE fundamental conflict in
political science (and individual moral philosophy, for that matter).

This basic thema is at the heart of my objection to the "let the SI(s)
decide" notion. Yes, I can imagine an entity with vastly superior knowledge
and power, but I cannot imagine one with PERFECT knowledge and power. And it
seems to me that any ruler with less than perfect knowledge and power is
inferior to "the agora" as a means of governing the affairs of sentient
beings. Those matters in which the imagined SI-philosopher-king has imperfect
knowledge and power will inevitably accumulate errors. And those accumulated
errors will become the seeds of what, for want of a better word, I will call
"unhappiness", which will grow and give birth to further "unhappinesses",
cascading throughout whatever social system you can envision. Whether you
use the word "disutility" or "inefficiency" or "injustice", your super-ruler
simply cannot KNOW enough to adjust EVERYTHING to some ideal state. When one
considers that the super-ruler will also have imperfect POWER, i.e. imperfect
abilities to effectuate its goals, the problem only becomes more severe.

If you concede that the simple complexity of "society" (i.e. a world of
multiple independent intentional actors) will inevitably overwhelm the
abilities of even the most powerful super-duper-AI-philosopher-king, then I
do not see how you can advocate the kind of AI-monarchy you seem to imagine.
Borgism, or the notion of a "Singleton" (to use Nick Bostrom's term), doesn't
resolve the problem, even if you were willing to advocate such a radical
"solution": doing so only turns what were previously social problems for a
non-unitary world into psychological problems for the Singleton.

> The very fact that we, the high-IQ Extropians, are
> discussing this question in the late twentieth century, and making more
> sense than eleventh-century philosophers or twentieth-century
> postmodernists, should lead us to conclude that intelligence does play a
> part in moral decisions - not just with respect to internal consistency
> and game-theoretical stability, but with respect to the basic
> foundations of morality. The very fact that we are making more sense
> than Neanderthals (or frogs) should lead us to conclude that
> intelligence plays a part. What lies beyond the human debate? I don't
> know, but I think it's reasonable to hope that beyond humanity lies the
> truth. And if I'm wrong, well, this course of action is as good as any.
  
This is where I disagree with you, because I have studied the history of
revolutions closely. I know that you protest that the heart of your
revolutionary "one best way" is fundamentally different from all of those
that went before, but I honestly don't think so. It may well be that
"scientific socialism" was superior to the "divine right of kings" or Plato's
"rule by the best" that went before, but tyranny, even benevolent tyranny,
has within it a fundamental logical flaw that will track through to undermine
even the most excellent super-duper-hyper-smart-mega-powerful-SI. You - or
an SI - can only "rule" the world for very brief times and, after all, with a
very constrained definition of "the world". Multiple intentional actors - be
they citizens or agents within a single mind - can be "governed" but
ultimately cannot be successfully "ruled", because any imaginable "ruler"
simply cannot KNOW enough or DO enough to really RULE.

     Greg Burch <GBurch1@aol.com>----<gburch@lockeliddell.com>
      Attorney ::: Vice President, Extropy Institute ::: Wilderness Guide
      http://users.aol.com/gburch1 -or- http://members.aol.com/gburch1
                         "Civilization is protest against nature;
                  progress requires us to take control of evolution."
                                           Thomas Huxley


