"Burdens" and what to do with them

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Jun 02 2002 - 11:59:10 MDT


Lee Corbin wrote:
>
> The reasons for the technical and commercial world dominance by
> Europeans starting in the 15th century are becoming clearer to
> historians and sociologists. The best starting place for such
> inquiries is "Guns, Germs, and Steel" by Jared Diamond.
>
> What ought to be done by those with the ability and the power?
> (Read: a singularity-nexus will be in exactly this position.)

I'm glad you phrased this as "singularity nexus" and not "the people who
create the singularity nexus". The people who create the singularity nexus
have the responsibility not to exert undue influence on the singularity
nexus. Since, under my best understanding of FAI, the architecture that
makes the goal system stable and what we would regard as commonsensical is
the same architecture that gives an FAI the power to conduct its own moral
reasoning, a deliberate attempt to exert undue influence seems to me to
indicate a profound misunderstanding of what FAI is about and, more
importantly, of how to build it, and hence seems doomed to end in disaster.
It is the only instance I know of where "Hubris...Ate" or purity of motive
actually is a good heuristic for engineering reasons, and not a meme
repeated merely because it sounds wise.

> This reminds me of the noble yet patronizing urge to extend
> a helping hand to the "less fortunate". As I said earlier
> about "minding your own business", once a person's stomach is
> full, the urge to meddle in other people's affairs becomes
> irresistible. But where exactly, or how, does one draw the line
> between true charity that actually improves the lot of life
> in the universe, and that which only makes the giver feel
> good and incidentally extends his power?

One good start lies in studying the evolutionary psychology which reveals
why it is that, once your stomach is full, you experience an urge to meddle
in others' affairs. It often helps to single out the thoughts which are a
product of that psychology. Emergent thoughts on how to benefit others will
tend to become fixed as attractors to the extent that they benefit the
inclusive reproductive fitness of the thinker, not the supposed
beneficiaries. Any would-be altruist has to do an incredible amount of
cleaning up before the resulting thought processes are interpretable as
"altruism" and not "evolution dangling a self-conceived altruist on puppet
strings". Of course, using this fact as an excuse to be cynicism about
altruism in general is simply a case of being dangled on puppet strings by
the "sophisticated cynic" archetype and invoking rationalization in the
service of selfishness. Fleeing from the murky history of altruism simply
lands you squarely in the still more murky history of selfishness, so you're
stuck with the cleanup job.

> I think that this is an incredibly difficult question, and
> I sure don't have a clear idea how to get answers. But
> whatever "advice" we give, note that it should apply equally
> well to Kipling and his British friends as it would to an
> advancing wave front of a technically superior civilization
> that reaches Earth. And, more ominously and far more likely,
> to our own home grown Singularity.

Figuring out how to build an FAI that can solve this moral problem - at
least as well as the civilization that built it could have - is not as
satisfying to our political instincts as directly arguing about morality,
but it is also far more useful, since I tend to take it as given that any
morality arrived at by human intelligence will not be optimal. Of course,
this is a far deeper question than most moral arguments, in the same way
that building an AI is more difficult than solving a problem yourself; but
unlike moral argument, there is a hope of arriving at an adequate answer.

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


