From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Mon May 31 2004 - 12:05:54 MDT
Aubrey de Grey wrote:
> Eli, many thanks for writing this extremely clear and thorough piece.
> I will never think of the islets of Langerhans in quite the same way
> again....
Thou'rt welcome.
> From your mention at many points in the essay of controlled shutdown,
Er, that's just while the programmers are gradually developing the young
Optimization Process. It's not an "evil AI out of control" measure. I
speak of an Optimization Process that doesn't optimize systems for which it
doesn't have a clear decision function.
> it seems to me that you are gravitating rather rapidly to the position
> that I instinctively have on FAI, which is that a true FAI will do very
> little indeed in the way of altering our environment but will concern
> itself strictly with pre-empting events that cause huge loss of life.
> Any "interference" in more minor matters will be seen (by it, if not
> in advance by its designers) as having drawbacks in terms of our wish
> for collective self-determination that outweigh its benefits.
Hence my suggestion that the collective volition would not optimize details
of individual life, but choose a set of collective background rules. I
don't see why this means not altering the environment. People live with
quite complex background rules already, such as "You must spend most of
your hours on boring, soul-draining labor just to make enough money to get
by" and "As time goes on you will slowly age, lose neurons, and die" and
"You need to fill out paperwork" and "Much of your life will be run by
people who enjoy exercising authority over you and huge bureaucracies you
can't affect." Moral caution or no, even I could design a better set of
background rules than that. It's not as if any human intelligence went
into designing the existing background rules; they just happened.
"Self-determination" can be described in an information-theoretic sense; a
small set of collective background rules is a small amount of information,
while messing with tiny details of six billion lives is a huge amount of
information. Changing the collective background rules might change
individual lives but it wouldn't interfere with self-determination; you'd
just go on steering your own future on the new playing field.
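To make the size comparison concrete, here is a back-of-envelope sketch in
Python; every number below is a guess pulled out of a hat purely for
illustration, not part of the argument itself:

    # Back-of-envelope sketch; all figures are illustrative guesses.
    RULES = 100                  # a generous set of collective background rules
    BITS_PER_RULE = 10_000       # ~1.25 KB of specification per rule
    PEOPLE = 6_000_000_000       # world population, circa 2004
    BITS_PER_LIFE = 10_000       # even a terse per-person "optimization" spec

    rules_bits = RULES * BITS_PER_RULE
    meddling_bits = PEOPLE * BITS_PER_LIFE

    print(f"background rules:  {rules_bits:,} bits")     # 1,000,000 bits (~125 KB)
    print(f"per-life meddling: {meddling_bits:,} bits")  # 60,000,000,000,000 bits (~7.5 TB)
    print(f"ratio: {meddling_bits // rules_bits:,}x")    # 60,000,000x

The exact numbers are beside the point; what matters is the gap of seven-odd
orders of magnitude between specifying shared rules and specifying individual
lives.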
As for *which particular* background rules I would design, it would be sheer
hypocrisy for me to go into that here, since I have already said that speaking
of such things on the SL4 mailing list requires a $10 donation to talk for 48
hours, $50 for a month, etc.
But this business of a planetary death rate of 150,000 deaths per day has
got to stop. *Now*. You have to be alive to worry about the philosophical
importance of self-determination.
> If we assume the above, the question that would seem to be epistatic to
> all others is whether the risks to life inherent in attempting to build
> a FAI (because one might build an unfriendly one) outweigh the benefits
> that success would give in reducing other risks to life. So, what are
> those benefits? -- how would the FAI actually pre-empt loss of life?
> Browsing Nick Bostrom's essay on existential risks, and in particular
> the "bangs" category, has confirmed my existing impression that bangs
> involving human action are far more likely than ones only involving
> human inaction (such as asteroid impacts). Hence, the FAI's job is to
> stop humans from doing risky things.
Or change the background rules such that risky things don't wipe out whole
planets.
> Here's where I get stuck: how
> does the FAI have the physical (as opposed to the cognitive) ability
> to do this?
Molecular nanotechnology, one would tend to assume, or whatever follows
after; nanotech seems sufficient unto the task, but perhaps an SI can do
better. My median estimate on the time required to bang together nanotech
out of available odds and ends of human civilization, a median estimate
which I freely admit to be manufactured of entire air, would be three days.
That is based on the fastest method of which I can think, which might
equally prove too pessimistic or too optimistic (note blatant abuse of
Principle of Indifference).
Three days is 450,000 deaths, and at a million-to-one speedup roughly 8,200
subjective years, so I hope I missed the many obvious ways to do it faster.
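Showing my work, in case anyone wants to check those figures (taking the
150,000/day rate and a flat million-to-one speedup as given):

    # Arithmetic check, using only the figures quoted in this post.
    deaths_per_day = 150_000
    delay_days = 3
    speedup = 1_000_000

    print(deaths_per_day * delay_days)      # 450,000 deaths during the delay
    print(delay_days * speedup / 365.25)    # ~8,214 subjective years of waiting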
> Surely only by advising other humans on what actions THEY
> should take to stop the risky actions: any other method would involve
> stopping us doing things without our agreeing on their riskiness, which
> violates the self-determination criterion.
Oh, c'mon, respecting self-determination doesn't require an FAI to be
*that* much of a wimp. There isn't that much self-determination in today's
world to respect. The average character in a science-fiction novel gets to
make far more interesting choices than a minimum-wage worker; you could
transport the entire human species into an alternate dimension based on a
randomly selected anime and increase the total amount of self-determination
going on. Not that I am advocating this, mind you; we can do better than
that. I only say that if human self-determination is desirable, we need
some kind of massive planetary intervention to increase it.
> But surely that is a big
> gaping hole in the whole idea, because the humans who obtain the FAI's
> advice can take it or leave it, just as Kennedy could take or leave the
> advice he received during the Cuban missile crisis. The whole edifice
> relies, surely, on people voting for people who respect the advice of
> the FAI more than that of human advisors. That may well happen, but it
> might not be very well publicised, to say the least.
I can't see this scenario as real-world stable, let alone ethical.
--
Eliezer S. Yudkowsky                          http://intelligence.org/
Research Fellow, Singularity Institute for Artificial Intelligence