Re: making microsingularities

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue May 29 2001 - 00:05:21 MDT


Samantha Atkins wrote:
>
> You might be right in your cogitations. But a hell of a lot of
> real world people, extropians among them, are not comfortable
> with the idea of the SysOp coming and saving us all in the short
> term or with the SysOp as "The Answer". It is a little too pat
> and fraught with danger.

The original question was not whether a Slow Singularity was desirable,
but whether a hard takeoff was possible. Suppose a hard takeoff is
possible. Would a deliberate Slow Singularity really help?

I don't see that a Slow Singularity significantly improves the situation,
except insofar as the programmers have longer to work with the near-human
version of the AI; given a good Friendship structure, that extra time
should not be necessary. It would still, perhaps, be nice, but the danger
of keeping a potentially-transcendent-but-deliberately-slowed-down AI in
your basement gives me the serious heebie-jeebies. I don't think you can
prevent the news from going public. Only if the originating project is
seriously unprepared does slowing down, going public, and enduring the
subsequent panic become a necessary risk.

Besides which, much of the risk of unFriendly AI lies in scenarios where
the transcending AI *refuses* to slow down so that you can tinker with
it. Thus, a deliberately slow Singularity does not necessarily decrease
real risk at all. Whole families of serious risks result from a
deliberately slow Singularity: if all the Friendly AIs agree to slow
down, then it's only a matter of time before someone accidentally or
deliberately creates an unFriendly AI that refuses to slow down. This
very argument is likely to lead a smart Friendly AI to refuse to slow
down.

Remember also that if an AI goes public, then what happens next will be
partially determined by public policy. In our current society,
policymakers will simply procrastinate forever. "I'll do it tomorrow" is
not a decision to do it tomorrow - it is a decision to never, ever do it,
because tomorrow you'll just say "I'll do it tomorrow" again.

Eliezer wrote (in "Creating Friendly AI"):
>
> The
> first self-modifying transhuman AI will have, at least in potential,
> nearly absolute physical power over our world. The *potential*
> existence of this absolute power is unavoidable; it's a direct
> consequence of the maximum potential speed of
> self-improvement.
>
> The question then becomes to what extent a Friendly AI would
> choose to realize this potential, for how long, and why. At the
> end of GISAI 1.1: Seed AI, it says:
>
> "My ultimate purpose in creating transhuman AI is
> to create a Transition Guide; an entity that can
> safely develop nanotechnology and any subsequent
> ultratechnologies that may be possible, use
> transhuman Friendliness to see what comes next,
> and use those ultratechnologies to see humanity
> safely through to whatever life is like on the other
> side of the Singularity."
>
> Some people reflexively assert that no really Friendly AI would
> choose to acquire that level of physical power, even
> temporarily - or even that a Friendly AI would never decide to
> acquire significantly more power than nearby entities. I think
> these people are unconsciously equating the possession of
> absolute physical power with the exercise of absolute social
> power in a pattern following a humanlike dictatorship; the latter,
> at least, is definitely unFriendly, but it does not follow from the
> former. Logically, an entity might possess absolute physical
> power and yet refuse to exercise it at all, in which case such an
> entity would be effectively nonexistent to us. More practically,
> an entity might possess unlimited power but still not exercise it
> in any way we would find obnoxious.
>
> Among humans, the only practical way to maximize actual
> freedom (the percentage of actions executed without
> interference) is to ensure that no human entity has the ability to
> interfere with you, because humans have an innate, evolved
> tendency to abuse power. Thus, a lot of our ethical guidelines
> (especially the ones we've come up with in the twentieth
> century) state that it's wrong to acquire too much power.
>
> If this is one of those things that simply doesn't apply in the
> spaces beyond the Singularity - if, having no evolved tendency
> to abuse power, no ethical injunction against the accumulation
> of power is necessary - one of the possible resolutions of the
> Singularity would be the Sysop Scenario.

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


