Re: Immortality and Personal Finance

From: I William Wiser (will@wiserlife.com)
Date: Wed Apr 24 2002 - 17:08:42 MDT


I don't see many differences in the actions that make sense to me at
this time whether I will live a billion years or only ninety. I don't see
much difference in what I would do if my contributions to life extension
are large or small. I don't see much difference between selfish
and altruistic motivations. In all of these cases living long and well
is a key goal, and my task is to figure out what it takes to survive,
increase my knowledge, improve my capabilities, and enjoy myself.

A few books I have read on happiness say that absorbing work you
consider worthwhile is one of the most enjoyable ways to spend
your time. That matches my personal experience and what I hear
from friends. Health and happiness are also interrelated. Good
relationships contribute to happiness and productivity. Many of
the things that lead to personal effectiveness and enjoyment also
put you in the position to help others. Whether or not my contributions
turn out to be significant I enjoy spending my time working on the
things I think are most important.

From my perspective recreation is not more fun than work but rather
a change of pace. Rest and play are valuable, but as much for the
increases in productivity they bring as for their own sake. I often derive as much
or more pleasure from practical expenditures or investments as from
frivolous purchases. Most of the hedonism that tempts me is very short
lived. Looking over the grand perspective of several weeks I usually
find the decisions are the same as I would make over the perspective
of decades. I find almost no differences between what seems wise over
decades and what seems wise over millennia.

Regarding finances in particular, compound interest is a big deal. The
more you can invest (in businesses, other people, your own improvement,
etc.) and the sooner you invest it, the better. Try to avoid being picky
or feeling deprived, but if you always think of the way you spend your
time and money as an investment, until you believe you have more time
and money than you have any use for, you probably will not go wrong.
Keep things balanced: do not let any key resource fall too low.
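As a back-of-the-envelope illustration of the compounding effect (the
7% annual return and the dollar figures below are hypothetical, chosen
only to make the arithmetic concrete): a lump sum P invested at rate r
for n years grows to P * (1 + r)^n, so starting earlier can matter more
than investing more. A minimal Python sketch:

    # Same lump sum, same assumed return; only the starting age differs.
    def future_value(principal, annual_rate, years):
        """Value of a single sum after compounding annually."""
        return principal * (1 + annual_rate) ** years

    rate = 0.07  # hypothetical 7% annual return
    for start_age in (25, 35, 45):
        years = 65 - start_age
        value = future_value(10_000, rate, years)
        print(f"$10,000 invested at {start_age} is worth "
              f"${value:,.0f} at 65 (assuming {rate:.0%}/yr)")

With these assumed numbers, the sum invested at 25 ends up roughly
twice as large as the one invested at 35 and about four times the one
invested at 45, which is the sense in which sooner is better.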

Your situation and assumptions may be different, but try making plans given
different base assumptions and see if the behaviors you think make sense
vary much. If they don't, then which assumptions you use matters less.
I do think there are unknowns which make a big difference, and I do think it
is possible to do some wasteful things with one's life, but if you optimize
for all the scenarios you think likely you may find common solutions.

-Will

----- Original Message -----
From: "Dan Fabulich" <daniel.fabulich@yale.edu>
To: <extropians@extropy.org>
Sent: Wednesday, April 24, 2002 12:33 PM
Subject: Re: Immortality and Personal Finance

> Eliezer S. Yudkowsky wrote:
>
> > Dan Fabulich wrote:
> > >
> > > So the question is: does my personal financial contribution to
> > > immortality research probably matter quite a lot? Or does it probably
> > > matter fairly little?
> >
> > This is an interesting question because I'm not entirely sure
> > whether it's being asked from an altruistic perspective, a purely
> > selfish perspective, a selfish-discount perspective, or a
> > selfish-split perspective.
>
> > [...]
>
> > If you don't think you can have any significant personal impact on
> > the future and you are purely selfish, then you should just spend
> > all your money on increasing your chance of personal survival.
>
> Your points are good, but I think they (to a great extent)
> misunderstood my question. I think the question you were answering
> was: "Should I promote my *own* likelihood of survival, or should I
> try to bring about the Singularity?"
>
> But I'm not 100% confident that *either* goal is attainable w/in my
> lifetime, or, for that matter, at all. If I could assume that human
> civilization would continue for an arbitrarily long amount of time,
> I'd be more confident that the Singularity would happen *someday*;
> that *some* generation of human beings will survive. But I'm not sure
> I can even count on that.
>
> What I'm responding to is the widespread notion that, since the
> Singularity is a fantastical notion and immortality research is
> nothing more than a pipe dream in a vacuum, I should plan my life as
> if I'm going to die. That means, in part, that I should try to create
> meaning in the brief few decades available to me; make sure that I'm
> adequately happy now (because I won't be able to postpone my
> satisfaction for even 120 years) and make life better for people I
> care about in other less lofty ways.
>
> > Most people mix selfishness and altruism.
> >
> > [...]
> >
> > However, the most common real algorithm is the "selfish split" - spend
> > some of your effort on yourself, and some of your effort on others.
> > This can't easily be mapped onto normative goal reasoning with a
> > desirability metric for possible futures, but it does make easy
> > intuitive sense to a human with a day planner...
>
> In fact, I think I can suggest a plausible reason as to why the
> selfish-split is such a natural notion for ordinary people with a
> day planner: adopting a large selfish-split is basically rational
> behavior for people who believe that every human generation will die.
>
> First, notice that even a person who believes that they should be
> purely altruistic will only reduce their own personal satisfaction to
> the point where their altruistic productivity is maximal; miserable
> people, overworked people, and people with no hope of personal gain
> tend not to accomplish as much as those who are basically happy and/or
> hopeful about their own well-being, all else being equal.
>
> Second, suppose [as I've read, though I don't have data right this
> second; ask me later] that one's hopes/expectations for the future are
> among the most significant correlates of *present* personal
> satisfaction. Suppose that this relationship is causal; I doubt that
> this can be proved (since present happiness probably also influences
> us to be optimistic), but it seems plausible to me.
>
> That means that altruistic people who think they can make it to
> Singularity, who think they can, therefore, postpone their personal
> satisfaction until after Singularity, will tend to be considerably
> happier *now* (all else being equal) than people who think that every
> generation of humans will eventually die, because Singularitarians
> have very high expectations for their future and the future of those
> they care about.
>
> But saying that Singularitarians would be happier now *all else being
> equal* implies that one of the factors being kept equal would be work
> spent on present personal satisfaction. That means that altruistic
> Singularitarians can take fewer actions towards their own current
> personal satisfaction than altruistic fatalists *and be just as
> happy*.
>
> That suggests to me that Singularitarians, all else being equal,
> should be able to trade off more present personal satisfaction than
> fatalists and still be at their own maximum altruistic productivity.
>
> Note: Although altruistic Singularitarians wouldn't need to work as
> much to promote their own present happiness, they would still have to
> work some; I'd expect they'd still have to work pretty hard on their
> own self-interest to maintain their highest levels of productivity
> [when compared with, say, an AI].
>
> You might see this as a convincing argument as to why you should be a
> Singularitarian; I think this overlooks something important, however.
> If the Singularitarian is wrong, and the fatalist is correct, then the
> Singularitarians are very productively spinning their own wheels:
> they'll work very hard, harder than analogous fatalists would have
> worked, but none of the work that Singularitarians will have done will
> actually have accomplished anything good for themselves or anyone
> else.
>
> Furthermore, since they will have spent all their time imagining that
> they could postpone their own happiness until later, their lives will
> have been qualitatively less happy, less fulfilled, and less worthwhile.
>
> [Think how frustrating it would be to know that you've spent all that
> time working on CATAI and now GISAI/CFAI, only to learn that in 2003
> human civilization and all your work would be destroyed in nuclear
> war.]
>
> Also note that we should expect our basic evolutionary strategies to
> be fatalistic; fatalism has been basically correct for the entire
> history of life on Earth, so planning like a fatalist probably tended
> to work out best for our ancestors. Of course, the productivity gain
> I mentioned above adds yet another evolutionary pressure for
> double-think: our ancestors who thought that they'd be very well-off,
> but planned like they wouldn't, probably tended to be better off.
>
> That brings me back to my original question, which you may now
> understand better: do I stand a good chance of influencing the
> Singularity, or a slim chance? If I stand a slim chance of affecting
> the probability or safety of the Singularity, then I should plan my
> life like a fatalist: if Singularity happens, great, but if not, I've
> planned ahead. If I DO stand a good chance of influencing the
> Singularity, then I should plan my life in a radically different way.
> (Of course, at the end of the day, there are degrees, and I'll
> probably be adopting some fraction of both strategies, but I still
> think it's worth asking whether I need to be mostly fatalist, mostly
> optimistic-Singularitarian, or something else entirely.)
>
> So I keep on wondering: which is it? Will I personally matter a lot?
> Or a little?
>
> -Dan
>
> -unless you love someone-
> -nothing else makes any sense-
> e.e. cummings
>
