Re: fluffy funny or hungry beaver?

From: Eugen Leitl (eugen@leitl.org)
Date: Sat Jun 08 2002 - 14:34:54 MDT


On Sat, 8 Jun 2002, Eliezer S. Yudkowsky wrote:

> While I understand that you don't want AI efforts to succeed, Eugen,

I like AI just fine. I just don't like >human AI in a runaway loop while
we're trapped outside, reduced to an observer role. It gives me phantom
pains in my whole body.

> this is no call to deliberately misrepresent what we are trying to do
> and why.

I wish you'd stop pointing people to an unreadable >800 kByte document,
and would instead answer a few direct questions on-list. As you'll notice,
I'm not criticizing any details of your design, because I don't know them.
I'm just describing the constraints any controlling arbiter (your system
included) must follow.
 
> The question of how a seed AI team can avoid exerting undue influence
> over humanity's future - and when you're talking about humanity's
> entire future, ANY personal influence is "undue influence" - is one of
> the deep questions to which Friendly AI is intended as an answer. It

Creating a seed AI and succeeding is clearly an influence over humanity's
future, and a rather large and irreversible one at that. Talking about
"deep questions" and "undue influence" in this context takes some nerve.

Sorry, as a human you're too limited to be trusted with decisions on this
scale. I don't trust anybody's judgement on this.

> is, furthermore, an answer that intrinsically requires exploring
> issues of Friendly AI. Even a two-thirds majority vote of every
> sentient being on Earth could still make moral errors or lack the
> entitlement to enforce its consensus view on others. For as long as

The good part about enforcing consensus is that none of the players is
omnipotent, so any moral error stays bounded. I'd rather not see a
man-made god make a moral error of any magnitude, thankyouverymuch.

> you assume an AI is a proxy, despotic or otherwise, there will be no
> moral answers to how you can pass on morality to an AI. A Friendly AI

Morality is not absolute --> there is no Single Golden Way to Do It.

> needs to embody those moral principles that govern what kind of
> morality it is legitimate to pass on to an AI, not just the moral
> principles its creators happened to have at the AI's moment of
> creation. This inherently requires delving into questions of
> cognition about morality and not just moral questions of the sort
> usually argued over. The question of how to build an AI that is an
> independent moral philosopher is *not* equivalent to, and in some
> cases barely touches upon, the question of what sort of moral material
> a Friendly AI should initially be given as an experiential learning
> set.

Meaningless meta-level description, not even wrong. Tell me from where you
derive the action constraints you feed into the enforcer. <--- that's
a genuine, very answerable question.
 
> I've heard you sketching your scenarios for how a team of five uploads
> who deliberately refuse intelligence enhancement will forcibly upload
> millions or billions of other humans and scatter them over the solar
> system before the Singularity begins. I will simply leave aside the

1) we don't know whether a hard-edged Singularity will occur naturally,
   so please don't try to precipitate one just because we can.

2) that was an ad hoc scenario, idiotic enough not even to put up for
   general discussion.

3) please kindly remove the "forcibly upload" part, okay? You are, after
   all, the one who keeps mentioning "misrepresenting".

> ethical questions and note that this is very, very hard, far too hard
> to be pragmatically feasible. Take care that you don't shoot your own

Let's agree to disagree about what is hard and what is easy. If there is a
Singularity, I think making its early-stage kinetics less fulminant is
considerably easier than building a (f|F)riendly singleton, and it doesn't
run the risk of blighting this place to boot.

> foot off by misrepresenting Friendly AI for the sake of advancing what
> you see as your *most* desirable future, because even from your

If you don't want your viewpoint misrepresented, I suggest you drop the
vague accusations against me and address, in a direct discussion with
concrete information, where I've supposedly wronged you (which reminds me:
there was a message in our last exchange to which I postponed a reply I
never finished, and I can't find it right now, damn. I'd be very thankful
if you could resend that message, or point it out in the archive).

> viewpoint there will still be a chance that events move through seed
> AI, and in that event you'd damn well better hope that Friendly AI

Critical seed AI research in machina is potentially extremely dangerous,
and needs to be regulated for the duration of the vulnerability window.
While regulation is far from an optimal solution, I'm not yet aware of a
better one.

> theory advances as far as possible. In short, play nice and remember
> that your real enemies in this issue probably don't subscribe to the
> Extropians list.

I'm sorry if my online persona is pure vitriol; that is not intended. I
think there's value in researching (f|F)riendliness in seed AI, especially
if you can produce a watertight argument. So far, however, you're not
making much sense.

I should probably bite the bullet and comment on
http://singinst.org/CFAI.html (is that the proper version?). I probably
shouldn't bother with Crit, and should just annotate the thing in straight
HTML.


