Re: Posthuman Politics

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Oct 17 2001 - 01:32:25 MDT


"Alex F. Bokov" wrote:
>
> In fact, we already have evidence that superhuman entities will not
> be friendly. Governments, mega-corporations, and [other] terrorist
> groups demonstrate that collective intelligences (CIs from now on) are
> capable of being...
>
> It ain't looking good, folks. Of course, Eliezer would argue that it's
> because CIs were designed by humans and inherited our evolutionary,
> atavistic, aggressive tendencies.

Actually, I'll argue that CIs are just aggregates of human individuals.
They do not exhibit greater-than-human intelligence. A corporation of
Neanderthals cannot outsmart a human. It takes an Earthweb a la Stiegler to
even start exhibiting flashes of transhuman ability, and even then, the gap
is not of the order that separated us from our Neanderthal cousins. An
Earthweb certainly is not a superintelligence... and a corporation is not
an Earthweb.

I don't see a corporation as a cognitive system at all. The Earthweb has
some claim to being a cognitive system, but it is still limited to those
thoughts that a human can originate and represent, even though it can
string a large number of independently originated good ideas into what
looks like superhuman deliberate reasoning.

*None* of these, CIs or even the Earthweb, has any claim to independent
goal-directed reasoning.

> To which I'll reply that...
>
> 1) The social graces that allow us to refrain from beating up people
> who cut in front of us in line or step on our feet have emerged from
> evolutionary forces, specifically the iterated Prisoner's Dilemma.

Right, that is the complexity that a Friendly AI would target for
absorption.
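For concreteness, here is a toy simulation of the iterated Prisoner's
Dilemma dynamic Alex mentions: tit-for-tat against unconditional defection,
using the textbook Axelrod payoffs (T=5, R=3, P=1, S=0). The strategies and
numbers are the standard illustrative ones, not anything from CFAI:

# Toy iterated Prisoner's Dilemma: tit-for-tat vs. always-defect.
# Illustrative sketch only, using the standard Axelrod payoff values.

PAYOFF = {  # (my_move, their_move) -> my score; 'C' = cooperate, 'D' = defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return history[-1] if history else 'C'

def always_defect(history):
    return 'D'

def play(strategy_a, strategy_b, rounds=100):
    """Return total scores for both strategies over an iterated game."""
    hist_a, hist_b = [], []  # each strategy sees the *other's* past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)
        move_b = strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300)
print(play(always_defect, always_defect))  # (100, 100)
print(play(tit_for_tat, always_defect))    # (99, 104)

Over 100 rounds, two reciprocators score 300 each, mutual defectors 100
each, and tit-for-tat loses only narrowly (99 to 104) in a single pairing
against a pure defector; that is roughly why reciprocity dominates in
Axelrod's population-level tournaments.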

> 2) It appears that social graces are too complex a behavior to be
> completely instinctual, and we need childhood and adolescence
> to fully develop these faculties. Babysitting an adolescent AI that's
> smarter than you are will be... challenging.

Have you read "The Psychological Foundations of Culture" by John Tooby and
Leda Cosmides, in "The Adapted Mind", edited by Barkow, Cosmides, and
Tooby? Just because humans make use of childhood and adolescence to grow
our innate social graces does not mean that an AI must do the same.

> 3) I've met Eliezer and he seems human and shaped by evolutionary
> pressures, at least insofar as anybody on this list is. I wish
> him the best in not transferring his human failings onto his
> brainchild. He's a brilliant guy, so he and his collaborators
> just might do it, but I'll remain optimally paranoid for now.

Friendly AI is exactly a method whereby imperfect humans, with evolved
brainware for both good and evil, can transfer only the good parts into an
AI. The ideal, and the design requirement, is that the end result should be
similar to what would happen if an initially altruistic human upload fixed
up vis own evolutionary failings.

As FAI problems go, avoiding accidental transfer of human failings is
pretty straightforward, at least under the CFAI architecture. A Friendly
AI is a deliberate rather than an unconscious copycat, and the AI knows
that programmers can make mistakes.

PS: For the record, can anyone here give a specific example of one of my
human failings? Note that I'm not claiming I don't have them... just idly
wondering whether anyone can actually name an example... because I do have
fewer than usual.

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


