Re: Fwd: Earthweb from transadmin

From: Matt Gingell (mjg223@nyu.edu)
Date: Mon Sep 18 2000 - 21:57:14 MDT


Eliezer S. Yudkowsky writes:
> Eugene Leitl wrote:
> >
> > So, please tell me how you can predict the growth of the core
>
> I do not propose to predict the growth of the core.
>
> Commonsense arguments are enough. If you like, you can think of the
> commonsense arguments as referring to fuzzily-bordered probability volumes in
> the Hamiltonian space of possibilities, but I don't see how that would
> contribute materially to intelligent thinking.

 [snip]

> > Tell me how a piece of "code" during the bootstrap process and
> > afterwards can formally predict what another piece of "code"
>
> I do not propose to make formal predictions of any type. Intelligence
> exploits the regularities in reality; these regularities can be formalized as
> fuzzily-bordered volumes of phase space - say, the space of possible minds
> that can be described as "friendly" - but this formalization adds nothing.
> Build an AI right smack in the middle of "friendly space" and it doesn't
> matter what kind of sophistries you can raise around the edges.

 Actually, it's quite useful, or at least it makes clear what's being
 argued about: Posit a space of brains, each paired with a coordinate
 specifying the successor that brain will design. Call it a trajectory
 set or a phase space or a manifold or whatever else you feel like
 calling it.

 You seem to think there's a dense, more-or-less coherent domain of
 attraction with a happily-ever-after stable point in the middle. A
 blotch of white paint with some grey bleed around the edges. If we
 start close to the center, our AI can orbit around as much as it
 likes but never escape into not-nice space. The boundaries are fuzzy,
 but that's unimportant: I can know the Empire State Building is a
 skyscraper without knowing exactly how many floors it has, or having
 any good reason for believing a twelve story apartment building
 isn't. So long as we're careful to start somewhere unambiguous, we
 don't have to worry about formal definitions or about proving that
 nothing nasty is going to happen.
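
 To make that picture concrete, here's a toy sketch (purely
 illustrative; the map, the constants, and the one-dimensional "mind
 space" are all made up) in which successor design acts like a
 contraction toward a friendly fixed point. A seed started anywhere
 reasonably close stays trapped near that point no matter how much it
 gets jostled along the way:

    import random

    FRIENDLY_POINT = 0.0   # hypothetical center of "friendly space"
    CONTRACTION = 0.5      # each generation halves the distance to the center
    NOISE = 0.05           # per-generation design error / drift

    def next_generation(x):
        # One brain designs its successor: pulled toward the fixed
        # point, plus a little unmodelled noise.
        return (FRIENDLY_POINT + CONTRACTION * (x - FRIENDLY_POINT)
                + random.uniform(-NOISE, NOISE))

    x = 0.8                # start "somewhere unambiguous" inside the basin
    for generation in range(30):
        x = next_generation(x)

    # The orbit wanders, but stays within roughly NOISE / (1 - CONTRACTION)
    # of the center; it never escapes into not-nice space.
    print(f"after 30 generations: x = {x:.4f}")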

 Eugene, on the other hand, seems to think this is not the case.
 Rather, the friendly and unfriendly subspaces are embedded in each
 other, and a single chaotic step in the wrong direction shoots a
 thread off into unpredictability. More like intertwined balls of
 turbulent Cantor-String, knotted together at infinitely many
 meta-stable catastrophe points, than like the comforting whirlpool of
 nice brains designing ever nicer ones. Your final destination is
 still a function of where you start, but it's so sensitive to initial
 conditions it depends more on the floating point rounding model you
 used on your first build than it does on anything you actually thought
 about.
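
 The contrast is easy to see in miniature (a toy map of my own
 choosing, not anything anyone is actually proposing to build): in a
 chaotic iteration, two seeds that differ by something on the order of
 a rounding error end up in completely unrelated places.

    def logistic(x, r=4.0):
        # The logistic map at r = 4 is the textbook example of chaos.
        return r * x * (1.0 - x)

    a = 0.2
    b = 0.2 + 1e-12        # a difference comparable to a rounding error
    for step in range(60):
        a, b = logistic(a), logistic(b)

    print(f"trajectory A: {a:.6f}")
    print(f"trajectory B: {b:.6f}")  # typically nowhere near A by now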

 The latter seems more plausible, my being a pessimist, the trajectory
 to hell being paved with locally-good intentions, etc. But who knows?
 That any prediction we're capable of making is necessarily wrong
 seems like a reasonable rule of thumb. You're the last person I'd
 expect to see make an appeal to common sense...

> Evolution is the degenerate case of intelligent design in which intelligence
> equals zero. If I happen to have a seed AI lying around, why should it be
> testing millions of unintelligent mutations when it could be testing millions
> of intelligent mutations?

 Intelligent design without intelligence is exactly what makes
 evolution such an interesting bootstrap: It doesn't beg the question
 of how to build an intelligent machine by assuming you happen to have
 one lying around already.
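
 For what it's worth, the whole trick fits in a few lines (the fitness
 function and parameters here are my own toy choices): blind mutation
 plus dumb selection needs no intelligent machine lying around, only a
 test it can apply.

    import random

    def fitness(bits):
        return sum(bits)                       # toy objective: count the 1s

    def mutate(bits, rate=0.02):
        return [b ^ (random.random() < rate) for b in bits]

    genome = [0] * 100                         # start from nothing clever at all
    for generation in range(2000):
        child = mutate(genome)
        if fitness(child) >= fitness(genome):  # keep anything no worse
            genome = child

    print(fitness(genome))                     # creeps toward 100 with no designer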

> > Tell me how the thing is guarded against spontaneous emergence of
> > autoreplicators in its very fabric, and from invasion of alien
> > autoreplicators from the outside.
>
> Solar Defense is the Sysop's problem; I fail to see why this problem is
> particularly more urgent for the Sysop Scenario than in any of the other
> possible futures.

 Eugene is talking, I think, about parasitic memes and mental illness
 here, not space invaders.

> > Tell me how many operations the thing will need to sample all possible
> > trajectories on the behaviour of the society as a whole (sounds
> > NP-complete to me), to pick the best of all possible worlds. (And will
> > it mean that all of us will have to till our virtual gardens?)
>
> I don't understand why you think I'm proposing such a thing. I am not
> proposing to instruct the Sysop to create the best of all possible worlds; I
> am proposing that building a Sysop instructed to be friendly while preserving
> individual rights is the best possible world *I* can attempt to create.

 The best bad option is better than the rest, I suppose, and if seed
 AI ends up being the way that works then it certainly can't hurt to
 program some affection into it. But it's rather like thinking a
 friendly, symbiotic strain of bacteria is likely to eventually
 evolve into friendly people. The first few steps might preserve
 something of our initial intent, but none of the well-meaning
 intermediates is going to have any more luck anticipating the
 behavior of a qualitatively better brain than you or I.

 -matt


