From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Apr 24 2002 - 15:39:44 MDT
Dan Fabulich wrote:
>
> Eliezer S. Yudkowsky wrote:
>
> > Dan Fabulich wrote:
> > >
> > > So the question is: does my personal financial contribution to
> > > immortality research probably matter quite a lot? Or does it probably
> > > matter fairly little?
> >
> > This is an interesting question because I'm not entirely sure
> > whether it's being asked from an altruistic perspective, a purely
> > selfish perspective, a selfish-discount perspective, or a
> > selfish-split perspective.
>
> Your points are good, but I think they (to a great extent)
> misunderstood my question. I think the question you were answering
> was: "Should I promote my *own* likelihood of survival, or should I
> try to bring about the Singularity?"
>
> But I'm not 100% confident that *either* goal is attainable w/in my
> lifetime, or, for that matter, at all. If I could assume that human
> civilization would continue for an arbitrarily long amount of time,
> I'd be more confident that the Singularity would happen *someday*;
> that *some* generation of human beings will survive. But I'm not sure
> I can even count on that.
Well, from an altruistic perspective, I don't think the absolute level of
confidence matters. What matters is not whether a Singularity is 95% likely
or 5% likely, but whether your personal efforts can bump that up to 95.001%
or 5.001%. From an altruistic perspective this is still playing for stakes
overwhelmingly higher than your own life; 0.001% ~ 60,000 lives. (1 day ~
150,000 lives.)
> What I'm responding to is the widespread notion that, since the
> Singularity is a fantastical notion and immortality research is
> nothing more than a pipe dream in a vacuum, I should plan my life as
> if I'm going to die.
If you believe that the Singularity is a fantastical notion and immortality
research is a pipe dream, and if you furthermore believe that (a) no
existential risks exist, (b) you cannot influence existential risks, or (c)
you discount all lives except those that presently exist, then you should
plan your life for yourself and the people around you. But if you believe
that the Singularity is *real* but *threatened*, you should try to rescue
it.
> That means, in part, that I should try to create
> meaning in the brief few decades available to me; make sure that I'm
> adequately happy now (because I won't be able to postpone my
> satisfaction for even 120 years) and make life better for people I
> care about in other less lofty ways.
This question mixes the issues of Singularity and leverage. You might work
toward a Singularity because you believe that you have enough leverage on
the Singularity to be worth more than your local contribution to world
happiness. Or you might work toward a Singularity because you are striving
to protect humanity's vast future, containing (*)illions of sentient beings,
in which case your one-six-billionth portion of this enormous future is
worth far more than your local contribution to immediate world happiness.
> > However, the most common real algorithm is the "selfish split" - spend some
> > of your effort on yourself, and some of your effort on others. This can't
> > easily be mapped onto normative goal reasoning with a desirability metric
> > for possible futures, but it does make easy intuitive sense to a human with
> > a day planner...
>
> In fact, I think I can suggest a plausible reason as to why the
> selfish-split is such a natural notion for ordinary people with a
> dayplanner: reaching a large selfish-split is basically rational
> behavior for people who believe that every human generation will die.
They must also believe that there is no unique critical point, coming up in
the immediate future, that must be handled to permit the existence of future
generations.
> First, notice that even a person who believes that they should be
> purely altruistic will only reduce their own personal satisfaction to
> the point where their altruistic productivity is maximal; miserable
> people, overworked people, and people with no hope of personal gain
> tend not to accomplish as much as those who are basically happy and/or
> hopeful about their own well-being, all else being equal.
Well, as a Friendly AI thinker, I do care about the integrity of my altruism
and not just my total altruistic output - I need the maximum possible amount
of data on what altruistic integrity looks like. So I would not compromise
cognitive altruism to increase productivity. This also reflects the belief,
on my part, that you can get farther by continuing without compromising than
by taking the short-term benefits of compromising at any given point; the
compromise may provide an easy way to move forward temporarily but it blocks
further progress.
In practice, this means that I try to minimize my attachment to personal
gain. Whenever I imagine a conflict of interest, I try to imagine myself
doing the altruistic thing. It doesn't matter whether this hurts
productivity; it's necessary to maintain cognitive integrity.
I try not to overwork myself, and maybe I try too hard and don't get as much
work done as I could, but I prefer to err on the side of caution.
I don't think I'm immediately miserable, and I certainly don't try to make
myself miserable, but I also pass up certain kinds of happiness - forms
of fun that I think involve unnecessary risk or consume more in time than
they pay back in energy.
I think that other people can and should sculpt themselves into thinking
like this, but I wouldn't want people to burn themselves out trying to do it
too fast. (Under a year is probably too fast.)
> But saying that Singularitarians would be happier now *all else being
> equal* implies that one of the factors being kept equal would be work
> spent on present personal satisfaction. That means that altruistic
> Singularitarians can take fewer actions towards their own current
> personal satisfaction than altruistic fatalists *and be just as
> happy*.
I'm not sure this is factually correct, and in any case it strikes me as a
profoundly wrong reason to be a Singularitarian. Maybe religious people are
genuinely happier than Extropians. But I'm not going to start wearing a
cross; I believe that, in doing so, in giving up my rational integrity for
short-term happiness, I would be sacrificing my potential as a human being -
including my potential to accomplish good.
> You might see this as a convincing argument as to why you should be a
> Singularitarian; I think this overlooks something important, however.
> If the Singularitarian is wrong, and the fatalist is correct, then the
> Singularitarians are very productively spinning their own wheels:
> they'll work very hard, harder than analogous fatalists would have
> worked, but none of the work that Singularitarians will have done will
> actually have accomplished anything good for themselves or anyone
> else.
I realize this and I accept it as a possibility. I will not resculpt my
entire life, and sacrifice my entire potential to make a difference, simply
to avoid living in a world where there is the *possibility* of frustration.
I think we should confront "unbearable" thoughts openly, so that the
instinctive flinch doesn't come to control our actions. How bad is this
possibility, really? Is it as bad as seeing the flash from a nuclear
fireball and waiting for the blast to hit, or watching a mass of goo
starting to grow inside your home?
> Furthermore, as they spent all their time imagining that they could
> postpone their own happiness to later, their lives will have been
> qualitatively less happy, less fulfilled, and less worthwhile.
Not as much as you might think. I may not have a girlfriend, but the people
who don't pass on that part of life may be missing out on other things;
missing out on the chance to know that their own life is significant,
missing out on the chance to understand their own minds, missing out on the
chance to build an AI. Nobody gets to be a complete human being this side
of the Singularity.
But that's beside the point. I probably do contribute less immediate local
happiness to the universe than I would if I lived my life a bit
differently. Those are the stakes I ante to the table because I think it's
a good bet.
> [Think how frustrating it would be to know that you've spent all that
> time working on CATAI and now GISAI/CFAI, only to learn that in 2003
> human civilization and all your work would be destroyed in nuclear
> war.]
If I learned that human civilization and all my work was to be destroyed in
nuclear war in 2003, I would drop absolutely everything else and try to stop
it. It wouldn't matter who told me that it was inevitable. Whatever tiny
mistrust I placed on the verdict of inevitability would be enough to direct
my actions.
And if Earth were still destroyed, I would regret my failure, but I would
not regret having tried.
Sometimes, when I'm trying to make people see things from a different
perspective, I point out that not getting married for 10-20 years really
isn't all that much of a sacrifice if you expect this to be an infinitesimal
fraction of your lifespan. But that really has nothing to do with it. If I
were told by an infallible prophet that, having succeeded in creating a
Friendly seed AI and nurturing it to hard takeoff, I would be run over by a
truck and my brain mashed beyond retrieval thirty minutes before the
Introdus wavefront arrived, I should hope that I would live my life in
exactly the same way. For that matter, if I had handed matters off to a
Friendly SI in the midst of creating nanotechnology, after the Singularity
was irrevocably set in motion but before the Introdus wavefront as such, and
I had the chance to sacrifice my life to save three kids from getting run
over by a bus, then I hope that I would do it. "Hope," because while I can
visualize myself doing it, and I can even visualize having no regrets, the
visualization is difficult enough that I can't be sure in advance of the
acid test.
Maybe thinking like that decreases my productivity in the here-and-now, but
I think it's more important *to the Singularity* for me to try and get this
modified primate brain to support altruism than for me to try and squeeze
the maximum possible amount of work out of it in the short term. Maybe
that's just an excuse that shows that I am still thinking selfishly despite
everything, and that I value a certain kind of moral comfort more than I
value Earth. There is always that uncertainty. But in the end I would
distrust the compromise more.
*After* the Singularity, when opportunities for major altruism are
comparatively rare because there won't be as many major threats, then I
expect that your happiness and the happiness of your immediate companion
network will be the flower that most needs tending. Today the flower that
most needs tending is the Singularity.
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence