Re: Artilects & stuff

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Sep 15 1999 - 18:23:06 MDT


den Otter wrote:
>
> Ok, let me explain...again.

Please do. I think your position is very peculiar; you've gotten
through most of the logic of causality and then, as far as I can tell,
stopped halfway.

> Emotions may be "arbitrary evolved adaptations", but they're
> also *the* meaning of life (or: give life meaning, but that's
> essentially the same thing). That's how we humans work.

I disagree. Let's suppose I accept your basic logic, as a premise for
discussion. Fine. Emotions are still only part of the mind, not all of
it. For as long as I continue doing things, making choices, instead of
committing suicide, then my life is as meaningful as yours. It doesn't
matter whether the particular set of cognitive contents that causes me
to eat a slice of pizza, or the cognitive contents that make me happy as
a result, are in the cortex or the limbic system - whether they're
built-in "emotions" or software "thoughts". How can you draw a hard
line between the two? They're both made of neurons.

> The only reason why logic matters is because it can be a useful
> tool to achieve "random" emotional goals. Rationality is practical
> ...in an emotional context. It has no inherent meaning or value.
> NOTHING has. Even the most superb SI would (by definition) be
> _meaningless_ if there wasn't someone or something to *appreciate*
> it. Value is observer-dependent. Subjective.

Okay, suppose I accept this. I still don't see why any SI would
automatically drop all its emotions *except* selfishness, or why you
think that would be a good thing. That's really where you lost me.

> At least, that's what I suspect. I could be wrong, but I seriously
> doubt it. So what's the logical thing to do? Risk my life because
> of some emotional whim ("must find the truth")? No, obviously not.

But why not? There's one emotional whim, "must find the truth". And
another emotional whim, "stay alive". And another, "be happy". In me
the first one is stronger than the other two. In you, the other two
are stronger than the first. I don't get the part where your way is more
rational than mine, especially if all this is observer-dependent.

I particularly don't get the part where an SI converges to your point of
view. And if it doesn't converge, why not build an SI whose innate
desires are to make you happy, and everyone else who helped out on the
project happy, and everyone on the planet happy - thus hopscotching
over the whole technological impossibility of uploading and guaranteeing
yourself a
much larger support base?

> The rational thing to do is to stay alive indefinitely, sticking
> to the default meaning of life (pleasant emotions) until, if ever [*]
> , you find something better. So maybe you'll just lead a "meaningful",
> happy life for all eternity. How horrible!

Okay, now I spot a level confusion. First you say "the rational thing
to do" - that is, the rational way to make choices - and then you follow
it up with the words "meaning of life". Aren't these the same thing?
Either you have an emotion impelling you to do rational things, or you
don't. If you don't, why are you rationally serving any other emotions?
It's not clear to me what kind of cognitive architecture you're
proposing. However an AI makes choices - that, I think, is what you
would call its "meaning of life". The same goes for humans.

Trying to divide this "meaning of life" into a set of declarative
"goals" and a set of procedural "rationality" strikes me as artificial
and subject to all sorts of challenges. Maybe the
"rationality" is another set of goals - "Goal: If A and A implies B,
conclude B." - and the new procedural meta-rationality is "follow all
these goal rules". Like taking a Turing machine with "rational
cognition" and "goals", and turning the state transition diagram into
goals, and making the new rational rules the Universal Turing machine
rules. Has anything really changed? No. The "meaning of life" is the
entire causal matrix that produces choices and behaviors; you can't
really subdivide it any further. The only reason we call something a
"goal" in the first place is because it produces goal-seeking behavior.
There's nothing magical about the LISP atom labeled "goal" that turns it
into an actual goal.
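
To make the Turing-machine analogy concrete, here's a toy sketch (my
own illustration; the states, symbols, and names are made up). The
"goals" are just a transition table, and the "rationality" is a generic
interpreter that looks the rules up. Move the line between them
wherever you like and the behavior is identical:

  # A hand-coded machine: the "rational cognition" is procedure, and
  # the "goal" (eat pizza when you see it) is baked into the code.
  def hardcoded_machine(state, symbol):
      if state == "seek" and symbol == "pizza":
          return "eat"
      return "seek"

  # The same machine with its state transition diagram turned into
  # declarative "goals", run by a universal interpreter - the new
  # "rational rules" are just the table lookup.
  GOALS = {("seek", "pizza"): "eat"}

  def universal_interpreter(goals, state, symbol):
      return goals.get((state, symbol), "seek")

  # Nothing has really changed; only the labels moved.
  assert hardcoded_machine("seek", "pizza") == \
         universal_interpreter(GOALS, "seek", "pizza")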

> [*] Actually, as I've pointed out before, uncertainty is
> "eternal"; you can never, for example, know 100% sure that
> killing yourself is the "right" thing to do, even if you're
> a SI^9. Likewise, the nonexistence of (a) God will never be
> proven conclusively, or that our "reality" isn't some superior
> entity's pet simulation etc. You can be "fairly sure", but
> never *completely* sure. This isn't "defeatism", but pure,
> hard logic. But that aside.

And I repeat: "You know this for certain?" But that aside.

> So stay alive, evolve and be happy. Don't get your ass killed
> because of some silly chimera. Once you've abolished suffering
> and gained immortality, you have forever to find out whether
> there is such a thing as an "external(ist)" meaning of life.
> There's no rush.

I'll certainly agree with that part. Like I said, the part I don't get
is where you object to an AI. And I also don't entirely understand what
you think the difference is between human and AI cognitive architectures
that makes humans so much more stable and trustworthy. Do you have any
idea what kind of muck is down there?

> Or:
>
> #1
> Your goal of creating superhuman AI and causing a
> Singularity is worth 10 emotional points. You get killed
> by the Singularity, so you have 10 points (+any previously
> earned points, obviously) total. A finite amount.
>
> #2
> You upload, transcend and live forever. You gain an infinite
> amount of points.
>
> Who wins?

Nobody, unless you say that "getting points" is the objective, provable,
rational meaning of life, in which case obviously the rational thing to
do is to create a superhuman AI, which will naturally follow the same
logic and attempt to earn an infinite number of points.

Or:

#1
"Your goal of creating superhuman AI and causing a
Singularity is worth 10 emotional points. You get killed
by the Singularity, so you have 10 points (+any previously
earned points, obviously) total. A finite amount."

This scenario is worth 20 emotional points to me.

#2
"You upload, transcend and live forever. You gain an infinite
amount of points."

This scenario is worth 5 emotional points to me.

Who wins?
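
Or, to put the observer-dependence in toy form (the numbers are mine
and purely illustrative): run the same two scenarios through two
different valuation functions and "who wins" comes out differently
each time.

  # Hypothetical point assignments - same scenarios, different observers.
  scenarios = ["build superhuman AI, get killed by the Singularity",
               "upload, transcend, live forever"]

  den_otter_points = {scenarios[0]: 10, scenarios[1]: float("inf")}
  eliezer_points   = {scenarios[0]: 20, scenarios[1]: 5}

  for observer, points in [("den Otter", den_otter_points),
                           ("Eliezer", eliezer_points)]:
      best = max(scenarios, key=lambda s: points[s])
      print(observer, "prefers:", best)

  # den Otter prefers: upload, transcend, live forever
  # Eliezer prefers: build superhuman AI, get killed by the Singularity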

-- 
           sentience@pobox.com          Eliezer S. Yudkowsky
        http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way

