From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Nov 21 1999 - 10:07:28 MST
Greg Burch said:
>
> This smacks of the kind of naive "scientific" approach to society that one
> finds in Marx and his followers. Substitute "the vanguard party" for "SI",
> and you have the kind of elitist, "we know what's best for society" mentality
> that led inevitably to Bolshevism.
As a meme, maybe. As a reality, never. <Let the SIs decide> may bear a
memetic resemblance to <let Marx decide>, when implemented on naive
hardware, but in reality, the Bolsheviks are human and the SI is not.
The reality of a state run by Bolsheviks and the reality of a world
rewritten by an SI would be utterly, unimaginably different. Humans
have always argued that their political parties know best. It's
human nature. The whole point of building an SI is to get out of the
trap by transcending human nature.
We don't trust humans who claim to know best, because we know that
humans have evolved to believe they know what's best and then abuse that
power for their own benefit. But to extend this heuristic to SIs
borders on the absurd. And that makes <turn power over to SIs>
different from <turn power over to me> in practice as well. The latter
says "Keep playing the game, but give me more points"; the former says
"Smash the damn game to pieces."
> I honestly can't imagine what process
> you're picturing this SI would engage in to make a "scientific" decision.
But that doesn't preclude the SI from doing so! The whole foundation of
"letting SIs decide" is the belief that somewhere out there is some
completely obvious answer to all of the philosophical questions that
perplex us. The very fact that we, the high-IQ Extropians, are
discussing this question in the late twentieth century, and making more
sense than eleventh-century philosophers or twentieth-century
postmodernists, should lead us to conclude that intelligence does play a
part in moral decisions - not just with respect to internal consistency
and game-theoretical stability, but with respect to the basic
foundations of morality. The very fact that we are making more sense
than Neanderthals (or frogs) should lead us to conclude that
intelligence plays a part. What lies beyond the human debate? I don't
know, but I think it's reasonable to hope that beyond humanity lies the
truth. And if I'm wrong, well, this course of action is as good as any.
> Some kind of balancing of everyone's utility functions based on perfect
> knowledge of their internal brain states? This sounds like one is merely
> substituting "SI" for "god" and "scientific decision making" for "paradise".
Hey, if it works, why argue? Personally I'd hope for something a little
more exciting, like some kind of provably moral grand adventure, but
your scenario also sounds like fun.
-- 
sentience@pobox.com         Eliezer S. Yudkowsky
http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS             Typing in Dvorak           Programming with Patterns
Voting for Libertarians     Heading for Singularity    There Is A Better Way