Re: Immortality and Personal Finance

From: Dan Fabulich (daniel.fabulich@yale.edu)
Date: Wed Apr 24 2002 - 13:33:24 MDT


Eliezer S. Yudkowsky wrote:

> Dan Fabulich wrote:
> >
> > So the question is: does my personal financial contribution to
> > immortality research probably matter quite a lot? Or does it probably
> > matter fairly little?
>
> This is an interesting question because I'm not entirely sure
> whether it's being asked from an altruistic perspective, a purely
> selfish perspective, a selfish-discount perspective, or a
> selfish-split perspective.

> [...]

> If you don't think you can have any significant personal impact on
> the future and you are purely selfish, then you should just spend
> all your money on increasing your chance of personal survival.

Your points are good, but I think they (to a great extent) miss the
question I was asking. I think the question you were answering was:
"Should I promote my *own* likelihood of survival, or should I try to
bring about the Singularity?"

But I'm not 100% confident that *either* goal is attainable w/in my
lifetime, or, for that matter, at all. If I could assume that human
civilization would continue for an arbitrarily long amount of time,
I'd be more confident that the Singularity would happen *someday*;
that *some* generation of human beings will survive. But I'm not sure
I can even count on that.

What I'm responding to is the widespread notion that, since the
Singularity is a fantastical notion and immortality research is
nothing more than a pipe dream in a vacuum, I should plan my life as
if I'm going to die. That means, in part, that I should try to create
meaning in the few brief decades available to me: make sure that I'm
adequately happy now (because I won't be able to postpone my
satisfaction for even 120 years), and make life better for the people
I care about in other, less lofty ways.

> Most people mix selfishness and altruism.
>
> [...]
>
> However, the most common real algorithm is the "selfish split" - spend some
> of your effort on yourself, and some of your effort on others. This can't
> easily be mapped onto normative goal reasoning with a desirability metric
> for possible futures, but it does make easy intuitive sense to a human with
> a day planner...

In fact, I think I can suggest a plausible reason why the
selfish-split is such a natural notion for ordinary people with a day
planner: adopting a large selfish-split is basically rational
behavior for people who believe that every human generation will die.

First, notice that even a person who believes that they should be
purely altruistic will only reduce their own personal satisfaction to
the point where their altruistic productivity is maximal; miserable
people, overworked people, and people with no hope of personal gain
tend not to accomplish as much as those who are basically happy and/or
hopeful about their own well-being, all else being equal.

Second, suppose [as I've read, though I don't have the data on hand
right this second; ask me later] that one's hopes/expectations for the
future are among the most significant correlates of *present* personal
satisfaction. Suppose that this relationship is causal; I doubt that
this can be proved (since present happiness probably also influences
us to be optimistic), but it seems plausible to me.

That means that altruistic people who think they can make it to the
Singularity, and who therefore think they can postpone their personal
satisfaction until after the Singularity, will tend to be considerably
happier *now* (all else being equal) than people who think that every
generation of humans will eventually die, because Singularitarians
have very high expectations for their own future and the future of
those they care about.

But saying that Singularitarians would be happier now *all else being
equal* implies that one of the factors being held equal is the effort
spent on present personal satisfaction. That means that altruistic
Singularitarians can spend less effort on their own current personal
satisfaction than altruistic fatalists do *and be just as happy*.

That suggests to me that Singularitarians, all else being equal,
should be able to trade away more present personal satisfaction than
fatalists can and still be at their own maximum altruistic
productivity.

Note: Although altruistic Singularitarians wouldn't need to work as
much to promote their own present happiness, they would still have to
work some; I'd expect they'd still have to work pretty hard on their
own self-interest to maintain their highest levels of productivity
[when compared with, say, an AI].
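
To make the shape of this argument concrete, here's a toy sketch in
Python. Everything in it -- the functional forms, the 0.5 and 1.5
"expectation bonus" values, the target happiness level -- is invented
purely for illustration, not taken from any data:

# A toy model, not data: assume happiness is diminishing-returns effort spent
# on present satisfaction plus a bonus from expectations about the future,
# and assume altruistic productivity peaks once happiness hits some target.

def happiness(effort_on_self, expectation_bonus):
    return effort_on_self ** 0.5 + expectation_bonus

TARGET = 2.0  # hypothetical happiness level at which productivity peaks

def effort_needed(expectation_bonus):
    # Effort on present satisfaction required to reach the target happiness.
    gap = max(TARGET - expectation_bonus, 0.0)
    return gap ** 2

fatalist_effort = effort_needed(expectation_bonus=0.5)         # 2.25
singularitarian_effort = effort_needed(expectation_bonus=1.5)  # 0.25

print(happiness(fatalist_effort, 0.5), fatalist_effort)
print(happiness(singularitarian_effort, 1.5), singularitarian_effort)
# Both reach happiness 2.0 (and hence peak productivity), but the
# Singularitarian spends far less effort on present satisfaction -- though
# still more than zero, which is the point of the note above.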

You might see this as a convincing argument as to why you should be a
Singularitarian; I think this overlooks something important, however.
If the Singularitarians are wrong, and the fatalists are correct, then
the Singularitarians are very productively spinning their wheels:
they'll work very hard, harder than analogous fatalists would have
worked, but none of the work that Singularitarians will have done will
actually have accomplished anything good for themselves or anyone
else.

Furthermore, because they spent all their time imagining that they
could postpone their own happiness until later, their lives will have
been qualitatively less happy, less fulfilled, and less worthwhile.

[Think how frustrating it would be to know that you've spent all that
time working on CATAI and now GISAI/CFAI, only to learn that in 2003
human civilization and all your work would be destroyed in nuclear
war.]

Also note that we should expect our basic evolutionary strategies to
be fatalistic; fatalism has been basically correct for the entire
history of life on Earth, so planning like a fatalist probably tended
to work out best for our ancestors. Of course, the productivity gain
I mentioned above adds yet another evolutionary pressure for
double-think: our ancestors who thought that they'd be very well-off,
but planned like they wouldn't, probably tended to be better off.

That brings me back to my original question, which you may now
understand better: do I stand a good chance of influencing the
Singularity, or a slim chance? If I stand a slim chance of affecting
the probability or safety of the Singularity, then I should plan my
life like a fatalist: if the Singularity happens, great, but if not, I've
planned ahead. If I DO stand a good chance of influencing the
Singularity, then I should plan my life in a radically different way.
(Of course, at the end of the day, there are degrees, and I'll
probably be adopting some fraction of both strategies, but I still
think it's worth asking whether I need to be mostly fatalist, mostly
optimistic-Singularitarian, or something else entirely.)
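
To show why the answer to that question does all the work, here's a
back-of-the-envelope expected-value sketch in Python. Every payoff,
cost, and probability in it is a placeholder I made up; the only point
is that the better plan flips depending on the probability that my
effort actually matters:

# Back-of-the-envelope sketch with made-up payoffs. p is my subjective
# probability that my effort meaningfully changes whether (or how safely)
# the Singularity happens.

def expected_value(p, payoff_if_i_matter, payoff_if_i_dont, cost_of_plan):
    return p * payoff_if_i_matter + (1 - p) * payoff_if_i_dont - cost_of_plan

for p in (0.001, 0.01, 0.1):
    # Fatalist plan: a modest payoff whether or not I matter, at a small cost
    # in deferred present happiness.
    fatalist = expected_value(p, payoff_if_i_matter=10, payoff_if_i_dont=10,
                              cost_of_plan=1)
    # Singularitarian plan: an enormous payoff only if my contribution matters,
    # at a large cost in deferred present happiness if it doesn't.
    singularitarian = expected_value(p, payoff_if_i_matter=1000,
                                     payoff_if_i_dont=0, cost_of_plan=5)
    print(p, fatalist, singularitarian)
# With these particular numbers the Singularitarian plan wins somewhere
# between p = 0.01 and p = 0.1 -- which is why "slim chance or good chance?"
# is the question I actually need answered.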

So I keep on wondering: which is it? Will I personally matter a lot?
Or a little?

-Dan

      -unless you love someone-
    -nothing else makes any sense-
           e.e. cummings


