GBurch1@aol.com wrote:
>
> "Never" seems like an extremely powerful statement to make in this context,
> Eliezer. Just so that it's clear, are you saying that there is no question
> in your mind that letting an SI run human affairs is preferable to any
> arrangement of society humans might work out on their own?
Unless the motives of an SI are subject to arbitrary configuration or insanity, which would theoretically allow a sufficiently stupid person to create an eternal hell-world, then I am sure beyond the point where caveats are necessary. In practice, I'm sure enough that you can rip my arm off if I'm wrong. (David Gerrold's Criterion.)
> > <snip> The reality of a state run by Bolsheviks and the reality of a world
> > rewritten by an SI would be utterly, unimaginably different.
>
> I'll grant you that it will be different, but I've never been convinced that
> the world(s) that SIs might create would be "unimaginably" different from the
> one we know now. Just as folks like Robin and Robert and Anders can take
> what seem to be the fundamental physical nature of reality and the basic
> structure of information theory and make rational projections of the nature
> and behavior of vastly more powerful minds, I think it might well be possible
> to make the same kind of informed speculation about what sorts of societies
> such entities might create. Economics and game theory - and yes, even
> biology and history - provide some pretty powerful tools for addressing such
> issues.
I've just finished reading Greg Egan's _Diaspora_. What really struck me was just how *human* all the characters were, despite their autopotence and exoselves and so on. They had human emotions, and human stupidities, and never really displayed a flash of anything beyond.
Sure, Sandberg and Hanson and co. can argue all they want. But ask yourself this question: Even if Greg Egan is completely right, even if Sandberg and Hanson and company are completely right, would *anyone* in the 19th century have seen it coming? Not "could", in theory: "Would", in practice. Because here we are, in practice, and I don't think it's plausible that this is the very first generation in all of history to understand what all the factors are.
> > Humans
> > have always been arguing that their political parties know best. It's
> > human nature. The whole point of building an SI is to get out of the
> > trap by transcending human nature.
>
> I understand that you are deeply convinced that making an SI that basically
> "takes over" is the "one best way" forward to the future, but I'm not. For
> the record, "taking over" is the part of which I'm not at all certain; not
> that "we" humans would be able to successfully oppose such a "take-over", but
> rather that it might be more difficult than you imagine to construct an
> effective SI that will WANT to "take over" human affairs.
I'm not going to second-guess the SI. I believe that the SI will do whatever I would do if I were only intelligent enough to see the necessity. I don't know what the motives of an SI may be, but I identify with SIs, as a child might identify with vis adult self.
> > We don't trust humans who claim to know best, because we know that
> > humans have evolved to believe they know what's best and then abuse that
> > power for their own benefit. But to extend this heuristic to SIs
> > borders on the absurd. And that makes <turn power over to SIs>
> > different from <turn power over to me> in practice as well. The latter
> > says "Keep playing the game, but give me more points"; the former says
> > "Smash the damn game to pieces."
>
> Some observations about advocating "smashing" the status quo. First, such
> rhetoric is inherently oppositional and confrontational. To me, such
> rhetoric doesn't seem conducive to, for instance, attracting investment.
I don't mean it that way. I mean it in the sense of, say, dropping a 16-ton weight on an eggshell. No fighting. No fuss. No violence. No human-level social connotations at all. Just an irresistible force and a small squishing noise.
> Second, the use of such rhetoric seems to me to cultivate the kind of
> revolutionary mind-set that has, in the past at least, been ill-suited to
> seeing alternatives and fruitful contradictions. Social revolutionaries tend
> to be single-minded, and single-mindedness doesn't lend itself to the
> scientific cast of mind that is open to new possibilities. This is why the
> Hollywood "mad scientist" caricature has always seemed so unrealistic to me.
Maybe. There are limits to how much time I'm willing to spend worrying about the ways twentieth-century American culture can misinterpret me.
> > But that doesn't preclude the SI doing so! The whole foundation of
> > "letting SIs decide" is the belief that somewhere out there is some
> > completely obvious answer to all of the philosophical questions that
> > perplex us.
>
> This statement smacks of Platonism to me, but I could be misled by your
> rhetoric and be missing some deeper truth in what you seem to be advocating
> as a social policy.
Again, I think we have the basic divide between "life goes on" and "walking into the unknowable". I am not advocating a social policy. I am not advocating anything human, and the whole concept of "social policy" is a very human thing.
> But consider the possibility that in fact we've already
> discerned some truths that make the kind of "completely obvious answer" to
> social and moral questions literally impossible.
I disagree. I don't think we've discerned much of anything. Even I have nothing but a set of ways to detect anthropomorphisms. When it comes to positive statements - hell, I don't even know what causality really is, so how am I supposed to understand goals?
> Again, I refer to Damien's post in this thread in which he referred to chaos
> and complexity theory. In particular, consider the strong objections we can
> now make to the possibility of true omniscience (in the traditional religious
> sense) based on information theory: Any system capable of actually predicting
> the physical future course of the entire universe would itself have to be a
> physical information structure at least as complex as the universe itself, a
> contradiction in terms on the most fundamental level for two reasons. First,
> on a purely logical basis, a universe-simulator would be part of the universe
> and would not be able to devote resources to both predicting the not-self
> parts of the universe and the part that makes up itself. Second, on more
> practical physical grounds, such a system, constrained by the limits of
> light-speed, could not operate even a perfectly accurate model of the
> not-self parts of the universe at sufficient rates to make accurate
> predictions.
Remember, I'm a noncomputationalist. The only honest answer I can give to that statement is "The Universe is not a Turing machine, and that reasoning only works on Turing machines."
> This abstract thought problem has important implications for what you seem to
> be advocating, which is to me just a technologically updated version of
> Plato's Republic, i.e. rule by "the best" or some AI-philosopher-king. The
> most fundamental conflict in the ur-thought of Western political theory was
> between what might be called the basilica and the agora (to mix Latin and
> Greek metaphors), i.e. between rule by one or a few with superior knowledge
> and power on the one hand, and rule by the open, on-going adjustment of
> social relations by the inter-workings of all members of a society on the
> other hand. This conflict has played itself out in every age and every
> society, in my opinion, and seems to me to be THE fundamental conflict in
> political science (and individual moral philosophy, for that matter).
Yes, but there wasn't even the theoretical possibility, at any point, of rule by anyone other than Cro-Magnons. And Cro-Magnons are, universally, evolved to contain special-purpose cognitive rules for reasoning about politics, with little hidden traps that cause every person to believe ve is the superior philosopher-king and fail to be one. It's all entirely explicable in terms of evolutionary psychology. And within that Integrated Causal Model, within that greater model of the Universe that contains and explains fifty thousand years of failure, we have every reason to believe that AIs will be on the outside.
Unless the initial programming makes a difference, in which case all bets are off.
> This basic thema is at the heart of my objection to the "let the SI(s)
> decide" notion. Yes, I can imagine an entity with vastly superior knowledge
> and power, but I cannot imagine one with PERFECT knowledge and power.
I can, but never mind. Why is perfection necessary? I'll take 99.99% accuracy any day.
> And it
> seems that any power with less than perfect knowledge and power is inferior
> to "the agora" as a means of governing the affairs of sentient beings.
Fine. Let's say the SI is programmed to simulate an enlightened, honest, fair, kindhearted, and absolutely informed agora. Wanna bet it couldn't beat the pants off of the modern American government?
> Those
> matters in which the imagined SI-philosopher-king has imperfect knowledge and
> power will accumulate errors, inevitably.
Why should errors accumulate instead of cancelling out?
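To make the intuition concrete, here's a toy simulation (my own illustration, assuming each of the ruler's decisions carries an independent, zero-mean error): the net error of N such decisions grows only like the square root of N, so the error *per decision* shrinks as N grows, rather than compounding.

```python
import random

random.seed(0)

def net_error(n_decisions, sigma=1.0):
    """Sum of n independent, zero-mean errors of scale sigma."""
    return sum(random.gauss(0.0, sigma) for _ in range(n_decisions))

# Averaged over many trials, |net_error(n)| grows like sqrt(n), not n:
# independent errors mostly cancel rather than accumulate.
trials = 1000
for n in (100, 10000):
    avg_abs = sum(abs(net_error(n)) for _ in range(trials)) / trials
    print(n, avg_abs / n)  # per-decision error shrinks as n grows
```

Of course, this cancellation only holds if the errors really are independent and unbiased; systematically correlated errors would be a different story.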
> And those accumulated errors will
> become the seeds of what, for want of a better word, I will call
> "unhappiness", that will grow and give birth to further "unhappinesses",
> cascading throughout whatever social system you can envision.
Once an error becomes large enough for humans to perceive, it is large enough for any SI remotely worthy of the phrase to notice and correct.
I mean, why doesn't this whole argument prove that thermostats will never match a free democracy that votes whether or not to turn the heater on?
> Whether you
> use the word "disutility" or "inefficiency" or "injustice", your super-ruler
> simply cannot KNOW enough to adjust EVERYTHING to some ideal state.
Even supposing this to be true, an SI could easily attain a high enough resolution to prevent any humanly noticeable unhappiness from developing. I mean, let's say you upload the whole human race. Now it runs on the mass of a basketball. If the SI can use the rest of the Earth-mass to think about that one basketball, I guarantee you it can attain a perceptual granularity a lot better than that of the humans running on it.
> When one
> considers that the super-ruler will also have imperfect POWER, i.e. imperfect
> abilities to effectuate its goals, the problem only becomes more severe.
Complete control over the processors running humanity isn't enough power for you?
> If you concede that the simple complexity of "society" (i.e. a world of
> multiple independent intentional actors) will inevitably overwhelm the
> abilities of even the most powerful super-duper-AI-philosopher-king, then I
> do not see how you can advocate the kind of AI-monarchy you seem to imagine.
What complexity? Just because it looks complex to you doesn't mean that it's complex to the SI. I occasionally get flashes of the level on which human minds are made up of understandable parts, and believe me, society is a lot less complex than that. Less communications bandwidth between the elements.
> Borgism or the notion of a "Singleton" to use Nick Bostrom's term, doesn't
> resolve the problem, even if you were willing to advocate such a radical
> "solution": Doing so only moves into the realm of psychology for the
> Singleton what were previously social problems for a non-unitary being.
Sounds like a pretty large improvement to me.
> > The very fact that we, the high-IQ Extropians, are
> > discussing this question in the late twentieth century, and making more
> > sense than eleventh-century philosophers or twentieth-century
> > postmodernists, should lead us to conclude that intelligence does play a
> > part in moral decisions - not just with respect to internal consistency
> > and game-theoretical stability, but with respect to the basic
> > foundations of morality. The very fact that we are making more sense
> > than Neanderthals (or frogs) should lead us to conclude that
> > intelligence plays a part. What lies beyond the human debate? I don't
> > know, but I think it's reasonable to hope that beyond humanity lies the
> > truth. And if I'm wrong, well, this course of action is as good as any.
>
> This is where I disagree with you, because I have studied the history of
> revolutions closely. I know that you protest that the heart of your
> revolutionary "one best way" is fundamentally different from all of those
> that went before, but I honestly don't think so. It may well be that
> "scientific socialism" was superior to the "divine right of kings" or Plato's
> "rule by the best" that went before, but tyranny, even benevolent tyranny,
> has within it a fundamental logical flaw that will track through to undermine
> even the most excellent super-duper-hyper-smart-mega-powerful-SI. You - or
> an SI - can only "rule" the world for very brief times and, after all, with a
> very constrained definition of "the world". Multiple intentional actors - be
> they citizens or agents within a single mind - can be "governed" but
> ultimately cannot be successfully "ruled", because any imaginable "ruler"
> simply cannot KNOW enough or DO enough to really RULE.
Is this a true argument?
Would it be obvious to a sufficiently intelligent being?
Then what will happen, in your visualization, is that I create an SI, and it adopts Greg Burch's personal philosophy and does exactly what you want it to for exactly the reasons you cited.
I really can't lose.
--
sentience@pobox.com          Eliezer S. Yudkowsky
http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way