Re: >H: The next 100 months

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Oct 18 1999 - 20:35:57 MDT


Sayke@aol.com wrote:
>
> In a message dated 10/18/99 9:22:53 AM Pacific Daylight Time,
> sentience@pobox.com writes:
>
> 1:30 pm est: eliezer climbs up onto the roof of the singularity
> institute's main building, pulls a large optic taser out of a suitcase, plugs
> it into the wall, checks the backup batteries, and sits down on an airvent to
> await... whatever.

It's entirely irrelevant what I do after this point. Or at least,
that's the idea. I would simply wait, just like everyone else... And
will it ever feel good to be able to say that.

If nothing happens after the Singularity, what I do next depends on why
nothing happened. In the generic Zone Barrier instance where the AI just
vanishes from reality, I would probably start concentrating on ensuring
that the invention of nanotechnology results in an immediate
extraterrestrial diaspora. (It wouldn't work; I would still try.) But
I'm fairly sure that won't happen; if civilizations did that sort of
thing, the Culture's Contact Section would be here by now.

Point being: If nothing happens after the Singularity, or something
weird occurs which requires re-planning or some other action on my part,
then I'll deal. I try not to form emotional dependencies on my analyses
of the future. The Singularity is simply the obvious thing to do at
this point in time, pretty much regardless of what your ultimate goals
are. If that doesn't work, I'll recalculate my goal system and do
something else. I'm not immune to shock, but I can usually recover
completely within, oh, five seconds or so. If the AI vanishes, I'll
deal. If the world turns out to be a computer simulation and the Layer
People want a word with me, I'll deal.

One of the nice things about having a mathematical formulation of your
personal philosophy is that there isn't any term for "shock value". If
the Singularity fizzles in any number of ways, I'll choose the next most
obvious thing to do and do it. It's just that the current "next thing"
is two orders of magnitude less attractive, and I really don't expect
the Singularity to fizzle, so I see no real need to plan ahead for it.
Call it a contingency-future discount factor of 100:1. It's not
negligible the way the contingency of winning the lottery or
discovering the Chocolate Asteroid is negligible; but it's not a very
important set of futures
either, especially when you consider how widely the preparation actions
are scattered across the set.
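
(A minimal sketch of that weighting, if you want it spelled out. The
payoffs and probabilities below are made-up numbers for illustration,
not anything I've actually calculated; all it encodes is the 100:1
discount and the two-orders-of-magnitude gap in attractiveness.)

    # Minimal sketch, not a real model: all numbers are hypothetical.
    # It just shows how a 100:1 contingency discount pushes planning
    # effort toward the mainline future rather than the fizzle futures.

    p_fizzle = 1.0 / 101.0          # fizzle futures weighted 100:1 against
    p_mainline = 1.0 - p_fizzle

    value_of_mainline_prep = 100.0  # payoff of steering for the Singularity
    value_of_fizzle_prep = 1.0      # "next thing": ~2 orders of magnitude less
    # (and the fizzle-prep payoff would be split further across the
    # scattered contingencies, making it smaller still)

    # Expected value of a unit of effort spent on each plan.
    ev_mainline = p_mainline * value_of_mainline_prep
    ev_fizzle = p_fizzle * value_of_fizzle_prep

    print(ev_mainline, ev_fizzle)   # roughly 99.0 vs 0.0099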

> 1:55 pm est: eliezer stands on an air vent casing, looks around, for one
> last time, at the placid cityscape surrounding him, and starts busting high
> voltage caps into the air and shouting "hey, elisson! i wanna know the
> meaning of life, the universe, and everything!!!"

I think you've managed to misunderstand my motives completely.

> 1:57 pm est: although elisson notices its surroundings almost
> immediately, it takes a short time for it to realize that the ant on the roof
> is its creator. its decision-making process is something vaguely like the
> following: "a monkey is discharging a monkey weapon on the roof. it might do
> something bad with that. no, there is no way it can damage me with that. this
monkey seems to be one of my primary creators. it's asking me questions. it is
> not necessary to answer its questions. cells 0x9e83fa823 through 0x9e83fc907,
> disassemble the monkey."

Oh, please! A Power isn't a super-AI any more than it's a super-human.

> 1:58 pm est: on the roof, the wind picks up, and eliezer notices the dust
> rise from the ground like a fractal wave of soot, and opens his arms in
> welcome. elisson, like a sandblaster, embraces him. eliezer ceases to exist
> in a sheet of black razorblade snowflakes.

Am I supposed to be shocked by this scenario? You don't want to know
what I would consider a bad end.

There are two problems with trying to shock me this way. First, unlike
you and den Otter, I suffer from no illusion that the world is fair.
You believe, because it is implicit in the human model of the world,
that every risk can be ameliorated by your actions. You'll choose a
path that's far more dangerous than mine in absolute terms, simply
because it allows you to "do something", or believe you're doing
something, about the risk that you'll be destroyed by AIs. I choose the
path with the best absolute probability, even if it isn't as emotionally
satisfying, even if it contains risks I admit to myself that I can't
affect, because the next best alternative is an order of magnitude
less attractive.

If Powers don't like mortals, then mortal life is doomed and there's
nothing we can do about it - whether we're humans or AIs or neurohacks
or augments or telepaths or hybrids or anything else on this side of the
line - that doesn't involve such enormous risks that we'd have to be
a priori certain that the Powers would kill us before it would be
survival-rational to do anything but steer for a Singularity.

Second, in a case like this, I would have to evaluate whether I wanted
my ultimate goal to be survival. I don't really have to do that
evaluation now, because the Singularity is intuitively obvious as the
thing to do next. Which is good, because I don't really trust
philosophy, even my own; I do, however, trust certain kinds of
intuitions. Nonetheless, if my choice of actions became dependent on
philosophy, personal survival wouldn't be my first pick as priority goal.

> pardon my rehash of what seems obvious, but isn't suicide bad?

Prove it. I don't trust philosophy, but I trust the limbic system even less.

-- 
           sentience@pobox.com          Eliezer S. Yudkowsky
        http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way

