From: Samantha Atkins (samantha@objectent.com)
Date: Thu Apr 25 2002 - 02:56:38 MDT
Eliezer S. Yudkowsky wrote:
>
> Well, as a Friendly AI thinker, I do care about the integrity of my altruism
> and not just my total altruistic output - I need the maximum possible amount
> of data on what altruistic integrity looks like. So I would not compromise
> cognitive altruism to increase productivity. This also reflects the belief,
> on my part, that you can get farther by continuing without compromising than
> by taking the short-term benefits of compromising at any given point; the
> compromise may provide an easy way to move forward temporarily but it blocks
> further progress.
>
Of course you don't want short-term benefits that rob you or the
Singularity of long-term good. However, how can maximum data
on altruism be achieved without studying what maximal altruism
looks like in real-world situations, and how to achieve it in
real life within the full context of limits? The only
productivity you altruistically care about is the maximum
well-being of all sentients. The current sentients cannot be
totally mortgaged off for the hoped-for future multitudes.
Historically, that leads to potent disasters for current and
future sentients alike.
> In practice, this means that I try to minimize my attachment to personal
> gain. I try to, whenever I imagine a conflict of interest, imagine myself
> doing the altruistic thing. It doesn't matter whether this hurts
> productivity; it's necessary to maintain cognitive integrity.
>
Given the motivation of your efforts, whatever lessens your
productivity (true productivity vs. some surface measure) lessens
your efficacy in achieving those goals. Usually, in a conflict
of interests, it is more or less possible to examine the choices
by their effect relative to one's hierarchy of values. The
higher the value, the more it should influence the choice.
>>But saying that Singularitarians would be happier now *all else being
>>equal* implies that one of the factors being kept equal would be work
>>spent on present personal satisfaction. That means that altruistic
>>Singularitarians can take fewer actions towards their own current
>>personal satisfaction than altruistic fatalists *and be just as
>>happy*.
>>
>
> I'm not sure this is factually correct, and in any case it strikes me as a
> profoundly wrong reason to be a Singularitarian. Maybe religious people are
> genuinely more happy than Extropians. But I'm not going to start wearing a
> cross; I believe that, in doing so, in giving up my rational integrity for
> short-term happiness, I would be sacrificing my potential as a human being -
> including my potential to accomplish good.
>
But if we are really interested in happiness, we have to study
what truly makes us happy rather than what merely makes us
temporarily feel better. Believing something just because it
feels good is not much different from being a junkie. But I
don't think that all religion is about that in the least. Nor do
I think what some call "rational integrity" is fully conducive
to true happiness. When one's notion of "rational integrity"
becomes a ball and chain rather than wings, one's notion needs
more examination.
>>Furthermore, as they spent all their time imagining that they could
>>postpone their own happiness to later, their lives will have been
>>qualitatively less happy, less fulfilled, and less worthwhile.
>>
>
> Not as much as you might think. I may not have a girlfriend, but the people
> who don't pass on that part of life may be missing out on other things;
> missing out on the chance to know that their own life is significant,
> missing out on the chance to understand their own minds, missing out on the
> chance to build an AI. Nobody gets to be a complete human being this side
> of the Singularity.
>
Choosing among alternatives moment by moment is being a full
human being, as far as I'm concerned. If having a girlfriend is
lower in your hierarchy of values and seems to be in some
conflict with higher values, then your humanity is fully
expressed by choosing for your highest value. I don't expect it
to be any different after the Singularity, except that there
will be a lot more time and many more possibilities.
> But that's beside the point. I probably do contribute less immediate local
> happiness to the universe than I would if I lived my life a bit
> differently. Those are the stakes I ante to the table because I think it's
> a good bet.
>
I doubt it. I can't picture anything you personally could do in
the context of these times that would make you any happier.
>
>>[Think how frustrating it would be to know that you've spent all that
>>time working on CATAI and now GISAI/CFAI, only to learn that in 2003
>>human civilization and all your work would be destroyed in nuclear
>>war.]
>>
>
> If I learned that human civilization and all my work was to be destroyed in
> nuclear war in 2003, I would drop absolutely everything else and try to stop
> it. It wouldn't matter who told me that it was inevitable. Whatever tiny
> mistrust I placed on the verdict of inevitability would be enough to direct
> my actions.
>
Yes. Same here.
> And if Earth were still destroyed, I would regret my failure, but I would
> not regret having tried.
>
> Sometimes, when I'm trying to make people see things from a different
> perspective, I point out that not getting married for 10-20 years really
> isn't all that much of a sacrifice if you expect this to be an infinitesimal
> fraction of your lifespan. But that really has nothing to do with it. If I
> were told by an infallible prophet that, having succeeded in creating a
> Friendly seed AI and nurturing it to hard takeoff, I would be run over by a
> truck and my brain mashed beyond retrieval thirty minutes before the
> Introdus wavefront arrived, I should hope that I would live my life in
> exactly the same way. For that matter, if I had handed matters off to a
> Friendly SI in the midst of creating nanotechnology, after the Singularity
> was irrevocably set in motion but before the Introdus wavefront as such, and
> I had the chance to sacrifice my life to save three kids from getting run
> over by a bus, then I hope that I would do it.
Why exactly would you hope this? It is not certain that these
three would make any greater contribution, or be of greater
value, just because there are three of them and one of you, is
it? I am not sure I follow your reasoning here.
> Maybe thinking like that decreases my productivity in the here-and-now, but
> I think it's more important *to the Singularity* for me to try and get this
> modified primate brain to support altruism than for me to try and squeeze
> the maximum possible amount of work out of it in the short term. Maybe
Perhaps it would be more fruitful to replace "altruism" with the
maximal well-being of all of us, including you and me. To me a
true Friendly AI would need to thoroughly take joy in its work
with all sentient beings, including itself. The thing to go
beyond is being stuck only on one's own being, or seeing one's
being as, for many purposes, strictly limited to one's own
person. It isn't achieved, as far as I can see, by continuing to
see a bunch of separate beings out there and valuing all other
beings more than, or especially to the exclusion of, oneself.
> that's just an excuse that shows that I am still thinking selfishly despite
> everything, and that I value a certain kind of moral comfort more than I
> value Earth. There is always that uncertainty. But in the end I would
> distrust the compromise more.
That seems sort of strange. The moral "comfort" is only of
value to you in regard to the well-being of Earth, or rather of
its sentients and their deliverance, through Friendly AI, from
the doom likely without a Singularity. If it is of more value
than your own highest goals, then something seems possibly
out of kilter. On the other hand, beneath even the Singularity
is a more core value that you are probably aware of. Perhaps
you are speaking of that?
>
> *After* the Singularity, when opportunities for major altruism are
> comparatively rare because there won't be as many major threats, then I
> expect that your happiness and the happiness of your immediate companion
> network will be the flower that most needs tending. Today the flower that
> most needs tending is the Singularity.
>
Happiness is in being fully plugged in and active toward what
one values and believes most important. There is nothing
inherently different before or after the Singularity in that.
If you are not happy in what you are doing now, I don't think
post-Singularity will find you any happier. Nor do I see how
what you are doing now is in truth a sacrifice. It only looks
that way from points of view that probably aren't your core, and
many of them may actually be points of view external to yourself.
- samantha