Re: Post Singularity Earth

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon May 20 2002 - 09:45:10 MDT


Eugen Leitl wrote:
>
> On Sat, 18 May 2002, Eliezer S. Yudkowsky wrote:
>
> > > I recommend you do a back-of-the-envelope estimate as to the resources required to run
> > > the Sysop (even short-scale decisions require sampling a lot of agent
> > > trajectories with very detailed agent models), and the constraints it
> > > imposes on its serfs.
> >
> > Show me the numbers. You have repeatedly declined to provide any
> > justification whatsoever for your assertion that "even short-scale
>
> Repeatedly? Don't recall anyone ever asking. Happy to oblige.
>
> > decisions require sampling a lot of agent trajectories". Now you are
> > claiming the ability to do quantitative modeling of superintelligent
> > resource utilization?
>
> Semiquantitative. I'm kinda busy, so it has to be brief.
>
> Here's the gist of the cost estimation: mature civilisations utilize every
> single atom and every single Joule available. There are no spare
> resources, period.

If we assume inconvenient constraints on the matter-energy available, then it
seems to me that rapidly expanding to consume all matter-energy available to
you as an individual may be suboptimal planning for fun optimization, unless
you assume that all matter-energy not consumed by you will be permanently
consumed by another. The ability to hold persistent private property, rather
than a rush to burn the cosmic commons, is part of the hypothesized
justification for intelligent substrate scenarios.

> A civilisation is an assembly of agents evolving along
> behaviour trajectories.

"Developing", not "evolving". Individuals follow developmental paths.
Populations may evolve, but populations evolve if and only if natural
selection is the primary determinant of the designs of new individuals and
new individuals dominate the path of the population. This obviously holds
true of pre-Singularity Earth; whether it holds true of a post-Singularity
scenario is far more iffy.

> Your hypothetical despot introduces constraints on
> all behaviour trajectories,

Your use of the term "despot" is a strong cue that you are setting up a
strawman scenario, especially given that the allegedly despotic scenario is
in fact being proposed as a means of *minimizing* the de-facto constraints
on behavior trajectories. For example, as I understand your proposed
alternative scenario, everyone has to immediately chew up all available
matter and expand as fast as possible just to stay in the race, after which
their behaviors are strongly constrained by the need to stay in defense-mode
every minute of the day. I strongly suspect that subjective freedom is far
greater when others cannot threaten your life.

> using a (boolean or scalar) friendliness metric.

In humans it tends to be scalar. I'm working on the assumption that it
starts out scalar and that the critical philosophical requirement is that it
be well-orderable.

> This metric is hardwired into every agent instance,

Incorrect; strawman argument. Friendliness is part of the intelligent
substrate, not the citizens. Friendliness is not and cannot be hardwired,
as I have said on any number of occasions. "Hardwired" is not a term that
one uses in discussing sentient entities.

> and needs to
> be completely specified at despot seed implementation time, for obvious
> reasons.

Incorrect; strawman argument. One needs an unambiguous reference to
Friendliness at seed implementation, not a complete specification. I can
leave an unambiguous reference to "the truth value of Fermat's Last Theorem"
even if I'm living before Andrew Wiles and I don't know the truth value
personally; the AI will fill in the blanks when its intelligence increases
enough to solve the problem.
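
(To make the distinction concrete, here is a minimal Python sketch; the
class name and the "prove_fermat" resolver are invented purely for
illustration and are not a claim about how a seed AI is actually
structured. The only point is that a reference can be unambiguous while its
value gets filled in later, once something is capable of resolving it.)

    class UnresolvedReference:
        """An unambiguous pointer to a fact whose value is not yet known."""
        def __init__(self, description, resolver):
            self.description = description  # e.g. "truth value of FLT"
            self.resolver = resolver        # procedure that may fail today

        def resolve(self):
            try:
                return self.resolver()      # fill in the blank when able
            except NotImplementedError:
                return None                 # not resolvable yet; the
                                            # reference itself is unchanged

    def prove_fermat():
        # Before Wiles this raises; afterward it would return True.
        # The reference below is unambiguous either way.
        raise NotImplementedError

    flt = UnresolvedReference("truth value of Fermat's Last Theorem",
                              prove_fermat)
    print(flt.resolve())  # None for now; True once a resolver exists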

> This involves a 1) classification/decision based on a number of
> observed trajectory frames 2) corrective action.

Volition-based Friendliness in an intelligent substrate scenario has the
interesting property that it can be fulfilled by positive obedience and
negative refusals without usually requiring "corrective action".

> Classification needs to occur in realtime, which is ~10^6 times faster
> than the current notion of realtime.

I agree that realtime post-Singularity may be much faster than current
realtime. If everyone and everything speeds up by roughly the same amount,
what is the relevance of comparison to our current world? Just because
everything happens instantly from our perspective doesn't mean it happens
any faster from their subjective perspectives. This is a badly structured
argument.
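
(The arithmetic behind that objection is trivial; a toy sketch, with the
10^6 factor taken from the quoted figure and everything else made up:)

    # If every process in the civilization speeds up by the same factor k,
    # the relative rates that define subjective realtime are unchanged.
    k = 10**6
    rates = {"citizen": 1.0, "substrate": 1.0}           # ops/sec, toy values
    sped_up = {name: rate * k for name, rate in rates.items()}

    before = rates["citizen"] / rates["substrate"]       # 1.0
    after = sped_up["citizen"] / sped_up["substrate"]    # still 1.0
    assert before == after  # nothing is subjectively faster for anyone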

> So
> your despot is distributed, having probably a single instance of the
> warden for each agent watched.

Hard distinctions between groups and individuals are anthropomorphic in such
discussions.

> Gaining decision time by slowing the timebase
> for a single agent or agent group is inconsistent, and induces penalties,
> the more so if you want to slow down considerable expanses of reality. The
> warden is considerably and adaptively smarter than even the smartest agents
> around, or else it can be outsmarted.

I tend to visualize the mind of the intelligent substrate as being the
largest mind in that civilization, by several orders of magnitude.

> Notice that global synchronization of
> despot instances is impossible even for moderate culture sizes
> (light-minutes to light-hours). Even so, traffic synching despot state will
> eat a considerable fraction of the entire traffic.

> Friendliness is not a
> realtime-decidable classification, since actions lead to consequences on a
> second, minute, or year scale.

Under volition-based Friendliness, actions that violate the volitions of
others should be readily detectable as requiring the manipulation of someone
else's private property. If you want to take an action so incredibly subtle
that the intelligent substrate can't figure out whether it's good for you or
bad for you, it's your own lookout.
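
(As a deliberately crude sketch of how "readily detectable" might cash out,
assuming nothing more than a hypothetical ownership table and consent
table; all the names below are invented:)

    from dataclasses import dataclass, field

    @dataclass
    class Action:
        actor: str
        writes: set = field(default_factory=set)  # resources the action modifies

    def violates_volition(action, ownership, consents):
        """Flag only writes to property that someone else owns and has not
        consented to; anything confined to your own matter passes."""
        for resource in action.writes:
            owner = ownership.get(resource)
            if owner not in (None, action.actor) and \
               (owner, resource, action.actor) not in consents:
                return True
        return False

    # Rearranging your own asteroid is never flagged; rearranging your
    # neighbour's is flagged unless the neighbour consented.
    ownership = {"asteroid-7": "alice"}
    print(violates_volition(Action("alice", {"asteroid-7"}), ownership, set()))  # False
    print(violates_volition(Action("bob", {"asteroid-7"}), ownership, set()))    # True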

> Retrograde actions are impossible (no magical
> physics), so you have to model the world (just that: the entire world,
> probably tessellating it in each despot instance), and to sample the
> consequences of each action which will manifest downstream. The world
> being nonlinear, you can't predict very far.

Perfect prediction of the future might be convenient. I don't see why it
would be necessary.

> Depending on action taken, it
> can be a hard bound on an action (you can't do something when you try)

Yes.

> or
> a soft bound (you never develop the predilection for poking thy neighbour's
> eye out, because even thoughtcrime is impossible thanks to the despot).

EEEWWW.

> A soft bound is far more computationally intensive to decide, because here
> you will have to sample several trajectories, compute your corrective
> force on the agent state vector, and guide its future evolution.

Not only computationally intensive, but also pure evil and grossly
unnecessary. And hence, unless you want to make a much stronger case for
it, a pure strawman argument. I would tend to regard this as a scenario
breaker.

> If you want an off-the-cuff estimate, given the above I don't see how the
> despot can consume considerably less than 90% of the entire available
> resources, plus put an unspecified penalty on the fastest available timebase.

Let's suppose this estimate is correct. So what? If the Solar System
contains, say, 10^33 grams of matter and ten-to-the-fifty-something
computing elements under current physical paradigms, is it really that much
of a difference to have a civilization with 10^51 bytes available for
citizens instead of 10^52? Measured in orders of magnitude of computation
available to citizens, which is the measure I suspect matters subjectively,
this is more like 2% overhead than 90% overhead, especially if available
resources are expanding; in optimistic scenarios where resources can be
expanded faster than any reasonable demand, it is not noticeable.
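
(For anyone who wants the arithmetic spelled out: the 10^52 and 90% figures
below are the ones from this exchange, not independent estimates.)

    import math

    total = 10**52                     # computing elements, current paradigms
    overhead = 0.90                    # the off-the-cuff estimate above
    citizens = total * (1 - overhead)  # 10^51 left for the citizens

    # Linear accounting: 90% gone.
    linear_loss = overhead                              # 0.90

    # Accounting in orders of magnitude of computation available:
    log_loss = 1 - math.log10(citizens) / math.log10(total)
    print(round(log_loss, 3))          # ~0.019, roughly 2% of the exponent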

> Apart from the above being core properties of evil incarnate, it is a very
> expensive evil to boot. But, hey, it controls the physical layer, so what
> can you do.

You can build Friendly AIs that don't create intelligent substrate scenarios
if their idiot programmers are *that wrong* about how intelligent substrate
scenarios work.

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


