From: den Otter (neosapient@geocities.com)
Date: Fri Sep 17 1999 - 17:22:44 MDT
----------
> From: Eliezer S. Yudkowsky <sentience@pobox.com>
> > Emotions may be "arbitrary evolved adaptations", but they're
> > also *the* meaning of life (or: give life meaning, but that's
> > essentially the same thing). That's how we humans work.
>
> I disagree. Let's suppose I accept your basic logic, as a premise for
> discussion. Fine. Emotions are still only part of the mind, not all of
> it. For as long as I continue doing things, making choices, instead of
> committing suicide, then my life is as meaningful as yours.
Not necessarily; I'd argue that meaning is directly linked to the
intensity of "emotions" (or "motivations" if you will). There's a big
difference between having just enough fear of death not to kill
yourself, and actually enjoying life, for example. Obviously it may not
matter to the outside world, as long as you do what you're supposed to
do etc., but it matters a lot on a personal level.
> It doesn't
> matter whether the particular set of cognitive contents that causes me
> to eat a slice of pizza, or the cognitive contents that make me happy as
> a result, are in the cortex or the limbic system - whether they're
> built-in "emotions" or software "thoughts".
You're right, it doesn't matter. It's the result that counts.
> > The only reason why logic matters is because it can be a useful
> > tool to achieve "random" emotional goals. Rationality is practical
> > ...in an emotional context. It has no inherent meaning or value.
> > NOTHING has. Even the most superb SI would (by definition) be
> > _meaningless_ if there wasn't someone or something to *appreciate*
> > it. Value is observer-dependent. Subjective.
>
> Okay, suppose I accept this. I still don't see why any SI will
> automatically drop off all its emotions *except* selfishness and then
> you think this is a good thing. That's really where you lost me.
Well, unless suicide is the big meaning of life (and the SI actually
gives a hoot about such things), it will need to retain
self-preservation in its mental structure. You need to be alive to do
things. I'm not saying that the SI will drop "all of its emotions", btw.
More likely it would modify them, and/or get rid of some, and/or add new
ones.Or perhaps it *would* get rid of them, but that's just one of many
options.
> > At least, that's what I suspect. I could be wrong, but I seriously
> > doubt it. So what's the logical thing to do? Risk my life because
> > of some emotional whim ("must find the truth")? No, obviously not.
>
> But why not? There's one emotional whim, "must find the truth". And
> another emotional whim, "stay alive". And another, "be happy". In me
> the first one is stronger than the other two. In you the two are
> stronger than the one. I don't get the part where your way is more
> rational than mine, especially if all this is observer-dependent.
What it all comes down to is scoring "satisfaction points", correct?
That's what drives us on. Even you. You set more or less random
goals, and reaching those goals will give you satisfaction (or remove
frustration) and thus generate points. Ok, now that we've determined
that we're in the same race, we can compare strategies. Some
strategies will get you more points than others. For you, finding the
truth (or creating an SI that may do that) is obviously worth a lot of
points, but, and this is a key issue, it isn't the *only* thing that
is worth points (to you). Other, more "mundane" activities, like
watching _Buffy the Vampire Slayer_, are also a source of points.
If all goals are essentially equal (in that they are ways to get
satisfaction/reduce discomfort), then the logical thing is to
pick a strategy that will earn you maximal emotional currency.
Short but dangerous thrills like smoking crack cocaine or
spawning SIs may get you a lot of points at once, but then
you die and that's it, no more points. If you settle (temporarily)
for lesser pleasures (fewer points), but survive (much) longer,
you will ultimately gain a lot more points, perhaps even
an infinite amount (the happy god scenario). You win!
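To put rough numbers on it, here is a minimal sketch of the expected-points
arithmetic. The payoffs, the survival probabilities and the expected_points
helper are all made up purely for illustration; nothing in the argument
depends on these particular values.

def expected_points(payoff_per_round, survival_prob, rounds):
    """Expected total satisfaction points over `rounds` rounds, where you
    must still be alive (probability survival_prob per round) to collect."""
    total, alive_prob = 0.0, 1.0
    for _ in range(rounds):
        alive_prob *= survival_prob   # chance of still being alive this round
        total += alive_prob * payoff_per_round
    return total

# Risky strategy: huge payoff per round, but a 50% chance of dying each round.
risky = expected_points(payoff_per_round=100, survival_prob=0.5, rounds=1000)
# Safe strategy: modest payoff per round, but you almost certainly survive.
safe = expected_points(payoff_per_round=1, survival_prob=0.999, rounds=1000)
print(f"risky: {risky:.1f}, safe: {safe:.1f}")
# The risky total converges to roughly 100 points no matter how long you play;
# the safe total keeps climbing with the horizon, and with survival_prob = 1
# it grows without bound (the "infinite points" / happy-god case).

On those made-up numbers the safe strategy already comes out several times
ahead after a thousand rounds, which is the point of the recap below.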
To recap, the best thing for you would be to drop your
relatively dangerous goal of creating an ASI, and compensate
for the loss of (anticipation-fun) points by concentrating on
other fun stuff, and/or modifying your mental structure (if
possible) so that "finding the truth" is linked to uploading
and living forever. This new memeplex could fill the void left
by the previous one ("Singularity at all costs").
I don't have any "grand" goals myself, really. Ok, so of course
I want to become immortal, god-like, explore all of reality and
beyond etc. etc., but that's hardly the meaning of life. I don't
really care about the meaning of life. Maybe it exists, maybe
not. Maybe I'll find it, maybe not. Who cares; I've done just fine
without it so far, and I don't see why this would have to change
in the future.
> I particularly don't get the part where an SI converges to your point of
> view. And if it doesn't converge, why not build an SI whose innate
> desires are to make you happy, and everyone else who helped out on the
> project happy, and everyone on the planet happy, thus hopscotching the
> whole uploading technological impossibility and guaranteeing yourself a
> much larger support base?
Is this a joke? I mean, *this* coming from *you*?
> Okay, now I spot a level confusion. First you say "the rational thing
> to do" - that is, the rational way to make choices - and then you follow
> it up with the words "meaning of life". Aren't these the same thing?
> Either you have an emotion impelling you to do rational things, or you
> don't. If you don't, why are you rationally serving any other emotions?
> It's not clear to me what kind of cognitive architecture you're
> proposing. However an AI makes choices, that, to you, I think, is its
> "meaning of life". Same thing goes for humans.
See above. Ultimately goals are about avoiding "bad" feelings and
generating good ones. Punishment & reward. Reason is just a tool
to get more reward than punishment. We should always keep this
in mind when selecting "random" goals.
> > [*] Actually, as I've pointed out before, uncertainty is
> > "eternal"; you can never, for example, know 100% sure that
> > killing yourself is the "right" thing to do, even if you're
> > a SI^9. Likewise, the nonexistence of (a) God will never be
> > proven conclusively, or that our "reality" isn't some superior
> > entity's pet simulation etc. You can be "fairly sure", but
> > never *completely* sure. This isn't "defeatism", but pure,
> > hard logic. But that aside.
>
> And I repeat: "You know this for certain?" But that aside.
I'm "pretty sure", but of course I can never be "completely sure"
either. It doesn't matter; I can't really lose with my approach. I'd
have forever to decide what to do next. You, on the other hand,
are betting everything on a force that, once released, is totally
beyond your control. My way (careful, gradual uploading) is
better because it would allow me to have some measure of
control over the situation. Having control increases the
chances of achieving your goals, whatever they may be.
> > So stay alive, evolve and be happy. Don't get your ass killed
> > because of some silly chimera. Once you've abolished suffering
> > and gained immortality, you have forever to find out whether
> > there is such a thing as an "external(ist)" meaning of life.
> > There's no rush.
>
> I'll certainly agree with that part. Like I said, the part I don't get
> is where you object to an AI.
The AI may decide that it doesn't want me around. The AI is
only useful from *my* pov if it helps me to transcend, and I
don't think that's very likely; I'm probably just a worthless bug
from *its* pov.