Re: The Meaning of Life

From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Sun Feb 09 1997 - 17:51:22 MST


[John K. Clark:]
> For the purposes of this argument it doesn't matter a bit if reason or
> Evolution CAUSED you to act in a certain way, in both cases they are
> "imposed". If I make a logical decision, that means that out of an infinite
> number of choices I narrowed things down to one particular action.

We aren't arguing determinism here. It seems to me this is about what
we identify with. If I identify with reason and act in accordance with
reason, I am free. Determinism doesn't enter into it; truth does. If
you identify with what you want and act in accordance with what you
want, determinism doesn't enter into it; desire does.

Determinism has no relevance to this sort of freedom - nor are *all*
*systems* possessed of free will, as your self-unpredictability
definition would have it. The question is whether the choices *you*
make - *you* being whatever you identify with, be it head or heart -
originate in your value/belief system (internal freedom) and are
successfully acted upon (external freedom). The relevance of your
definition is that if you know beforehand what decision you will make,
the outcome was probably pre-defined (irrespective of your
deliberation) and thus imposed.

> So, you have chosen to identify with evolution imposed reason. I don't wish
> to insult you so let me say that I think your choice is reasonable, there
> were reasons, causes, for you to make that decision. I'm not sure if the
> causes were in your genes or your environment and it doesn't matter, they're
> still causes.

I don't care where reason comes from. I don't care if it's
evolution-imposed, blank-slate-deduced, or God-given. All I care about
is if it's true. The great revelation that I reason because I have
evolved to do so falls flat. Reason itself also says to reason, and if
it said to stop reasoning, I would - uhhh...

/** A ridiculously simplified goal for an emotional architecture. */
class Goal extends CogObject {

    /** The precomputed value of this goal. Used to determine
        fatigue rates, compute subgoal values, and so on. */
    int value;

    /** The goals to which this goal is subgoal; those goals which the
        fulfillment of this goal will advance. */
    Goal[] justification;
    /** The probability that fulfillment of this goal will help fulfill
        each goal above. */
    float[] weights;

    /** Construct a goal with a precomputed (intrinsic) value. */
    Goal (int value, Goal[] justification, float[] weights) {
        this.value = value;
        this.justification = justification;
        this.weights = weights;
    }

    /** Construct a goal whose value is derived lazily from its
        justification. */
    Goal (Goal[] justification, float[] weights) {
        this(0, justification, weights);
    }

    /** Get the value of this goal, deriving it lazily from the
        justification if no precomputed value has been set. */
    int value () {
       if (value != 0)
          return value;
       for (int i = 0; i < justification.length; i++)
          value += (int) (justification[i].value() * weights[i]);
       return value;
    } // end value

    static final Goal survival =
        new Goal(700, new Goal[0], new float[0]);
    static final Goal reproduction =
        new Goal(400, new Goal[0], new float[0]);
    static final Goal embarrassment =
        new Goal(-300, new Goal[0], new float[0]);
    /** A goal with no precomputed value; value() derives it from the
        justification. */
    static final Goal dress_properly =
        new Goal(new Goal[] {reproduction, embarrassment},
                 new float[] {0.32f, -0.74f});

} // end Goal
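
(A minimal usage sketch, not part of the class above: it assumes the
constructors filled in above, plus a stub CogObject so the fragment
stands on its own.)

class CogObject { }  // stand-in for the real cognitive-object base class

class GoalDemo {
    public static void main (String[] args) {
        // dress_properly has no precomputed value, so value() derives
        // it from the justification: 0.32*400 + (-0.74)*(-300) = 350.
        System.out.println(Goal.dress_properly.value());
    }
} // end GoalDemo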

> You decide to do things you don't want to do??? Why do you do them?

Because they are justified. I don't intend to play word-games here.
There are cognitively real distinctions between want and decide, and I
am using them. There is a difference between the "value" and
"justification" slots of goals. Under normal circumstances, this rarely
shows up. As a countersphexist, I run into it all the time; highly
justified goals have negative values for no real reason. Now, under
those circumstances, you can decide to follow the value-slots or the
justification-slots. Although my general cognitive systems are tuned to
follow the value, I try to hack it up so I follow the justification.
What I want to do has no relation to what I have decided to do, but I
try to carry on anyway.
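
In terms of the Goal class above, the difference looks something like
this. (A sketch only - the Chooser class and the wantedness /
justifiedness / decide names are made up here for illustration, not
part of any real architecture.)

/** Illustration only: two different readings of the same goal, and a
    decision rule that follows one of them. */
class Chooser {

    /** "Want": whatever number the value slot happens to hold. */
    static int wantedness (Goal g) {
        return g.value;
    }

    /** "Decide": ignore the value slot and recompute strictly from
        the justification links. */
    static int justifiedness (Goal g) {
        int total = 0;
        for (int i = 0; i < g.justification.length; i++)
            total += (int) (g.justification[i].value() * g.weights[i]);
        return total;
    }

    /** The countersphexist hack: act on the most justified candidate,
        even when the value slots disagree. */
    static Goal decide (Goal[] candidates) {
        Goal best = candidates[0];
        for (int i = 1; i < candidates.length; i++)
            if (justifiedness(candidates[i]) > justifiedness(best))
                best = candidates[i];
        return best;
    }

} // end Chooser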

Again, you can use my concept of "formal ethical system" to see how this
corresponds to your argument. To you, meaning is value. High-value
goals have high meaning. To me, meaning is justification. Highly
justified goals have high meaning.

> "Want" is an emotion. You don't "want" your emotions assigning positive value
> to things, because that would make you sad, or at least, you think it would
> make you sad. You think the idea of maximizing intrinsic value (whatever that
> is) will make you happy, while maximizing subjective value will not.

Tsk, tsk. You're Johnkclarkomorphizing me. I've decided that emotions
assigning positive value to things is not conducive to reaching
Singularity. Maximizing intrinsic value is the only logical basis for a
goal system, while observer-arbitrary value is nothing to me. No
emotion at all.

Well, not really, because emotions are part of the system that reasons
with respect to values, and if I somehow shut off the entire emotional
system, that would cripple my ability to reason about goals. But I'm
sure you reason about what you "want", and see no conflict there either.

(Re: my "he is/I am" list.)
> I need a little help here, I don't see it.

I identify with the head, you with the heart.
Under the circumstances, our respective philosophies seem a bit ironic.

Here's an interesting question:

What is altruism?
Is it sacrificing your happiness for the happiness of others?
Or is it gaining your happiness through the happiness of others?

I would unconditionally answer #2. Given the way these ironies go, I'd
guess you're #1, though a priori you'd seem likely to be #2.

-- 
         sentience@pobox.com      Eliezer S. Yudkowsky
          http://tezcat.com/~eliezer/singularity.html
           http://tezcat.com/~eliezer/algernon.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.

