Re: Fwd: Earthweb from transadmin

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Sep 18 2000 - 13:00:55 MDT


Eugene Leitl wrote:
>
> So, please tell me how you can predict the growth of the core

I do not propose to predict the growth of the core.

Commonsense arguments are enough. If you like, you can think of the
commonsense arguments as referring to fuzzily-bordered probability volumes in
the Hamiltonian space of possibilities, but I don't see how that would
contribute materially to intelligent thinking.

I can predict the behavior of the core in terms of ternary logic: Either it's
friendly, or it's not friendly, or I have failed to understand What's Going
On.

All else being equal, it should be friendly.

> Tell me how a piece of "code" during the bootstrap process and
> afterwards can formally predict what another piece of "code"

I do not propose to make formal predictions of any type. Intelligence
exploits the regularities in reality; these regularities can be formalized as
fuzzily-bordered volumes of phase space - say, the space of possible minds
that can be described as "friendly" - but this formalization adds nothing.
Build an AI right smack in the middle of "friendly space" and it doesn't
matter what kind of sophistries you can raise around the edges.

I cannot formally predict the molecular behavior of a skyscraper; the
concept of a skyscraper is not formally definable around the edges; I can
still tell the difference between a skyscraper and a hut.

> Tell me how a team of human programmers is supposed to break through
> the complexity barrier while building the seed AI without resorting to
> evolutionary algorithms

We've been through this.

Evolution is the degenerate case of intelligent design in which intelligence
equals zero. If I happen to have a seed AI lying around, why should it be
testing millions of unintelligent mutations when it could be testing millions
of intelligent mutations?
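
To make that concrete, here is a toy Python sketch of my own (purely
illustrative, not drawn from any actual seed AI code; the bit-string problem,
the fitness function, and the "guided_mutate" proposer are placeholders I made
up). Both searches are generate-and-test; the only difference is whether the
next candidate is proposed blindly or by a proposer that models the objective.

    import random

    TARGET_LEN = 64

    def fitness(bits):
        # Toy objective: count of 1-bits; stands in for "how good is
        # this design".
        return sum(bits)

    def blind_mutate(bits):
        # Evolution-style proposal: flip one random bit, with no model
        # of the problem at all.
        trial = list(bits)
        i = random.randrange(len(trial))
        trial[i] ^= 1
        return trial

    def guided_mutate(bits):
        # "Intelligent mutation": the proposer understands the objective
        # well enough to flip a bit it expects to help (a zero bit, here).
        trial = list(bits)
        zeros = [i for i, b in enumerate(trial) if b == 0]
        if zeros:
            trial[random.choice(zeros)] = 1
        return trial

    def search(propose, steps):
        # Generate-and-test: keep a candidate only if it scores higher.
        best = [0] * TARGET_LEN
        best_score = fitness(best)
        evaluations = 0
        for _ in range(steps):
            trial = propose(best)
            evaluations += 1
            score = fitness(trial)
            if score > best_score:
                best, best_score = trial, score
            if best_score == TARGET_LEN:
                break
        return best_score, evaluations

    if __name__ == "__main__":
        random.seed(0)
        print("blind: ", search(blind_mutate, 10000))
        print("guided:", search(guided_mutate, 10000))

The blind search still gets there eventually; it just burns several times as
many evaluations doing so, and the gap only widens on objectives less trivial
than this one.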

> Tell me how a single distributed monode can arbitrate synchronous
> events separated by light seconds, minutes, hours, years, megayears
> distances without having to resort to relativistic signalling.

You confuse computational architecture with cognitive coherence and
motivational coherence.

> If it's not single, tell me what other nodes will do with a node's
> decision they consider not kosher, and how they enforce it.

I do not expect motivational conflicts to arise due to distributed processing,
any more than I expect different nodes to come up with different laws of
arithmetic.

> Tell me how the thing is guarded against spontaneous emergence of
> autoreplicators in its very fabric, and from invasion of alien
> autoreplicators from the outside.

Solar Defense is the Sysop's problem; I fail to see why this problem is
particularly more urgent for the Sysop Scenario than in any of the other
possible futures.

> Tell me how many operations the thing will need to sample all possible
> trajectories on the behaviour of the society as a whole (sounds
> NP-complete to me), to pick the best of all possible worlds. (And will
> it mean that all of us will have to till our virtual gardens?)

I don't understand why you think I'm proposing such a thing. I am not
proposing to instruct the Sysop to create the best of all possible worlds; I
am proposing that building a Sysop instructed to be friendly while preserving
individual rights is the best possible world *I* can attempt to create.

> What is the proposed temporal scope of the prediction horizon?
> Minutes? Hours? Years?

Again, explain to me what the Sysop is predicting and why it needs to predict
it. I can predict that the Sun will not naturally explode; this prediction
horizon spans a million years, and Gödel be damned.

> How can you decide what the long-term impact of an event in the here
> and now is?

Crossing the street is pretty long-term from the Planck-time perspective, yet
somehow you manage not to get hit by any cars. Exercise some common sense.

> There's more, but I'm finished for now. If you can argue all of above
> points convincingly (no handwaving please), I might start to consider
> that there's something more to your proposal than just hot air. So
> show us the money, instead of constantly pelting the list with many
> redundant descriptions of how wonderful the sysop will be. Frankly,
> I'm getting sick of it.

Frankly, 'gene, I'm starting to get pretty sick of your attitude. Who are you
to decide whether my proposal is hot air? I can't see that it makes the least
bit of difference to the world what you think of my proposal, and frankly, you
have now managed to tick me off. I may consider my AI work to be superior to
yours, but I don't propose that you have a responsibility to convince me of
one damn thing. I expect to be extended the same courtesy.

Sincerely,
Eliezer.

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
