From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Jan 14 2001 - 00:36:03 MST
John Marlow wrote:
>
> **One: Yeah--the thing behaves like a human because
> it's programmed by humans and likely monkeyed around
> with by military types, whose thinking I don't care
> for.
I hope not. I hope that we can finish before the military types start
monkeying.
> **Two: However the thing starts off, it glitches/goes
> chaotic/turns psychotic and can't be shut off.
Yes, that is the risk. It is an *unavoidable* risk - at least,
unavoidable except by exterminating all sentient life in the solar
system. It has been implicit since the dawn of intelligence, waiting for
the moment when intelligence begins to enhance itself.
> **Three: The thing behaves NOTHING like a human--is,
> in fact, completely alien. The basis of its decisions
> will therefore be unknowable.
Nothing is completely knowable. The basis of a Friendly AI's decisions is
certainly not completely unknowable. Beating the maximum human kindliness
is probably more than good enough.
> It might push us to the
> limits of the universe and help us create an
> everlasting utopia--or exterminate us all tomorrow for
> a reason we couldn't even guess BECAUSE it's nothing
> like a human.
Yes, that is the Great Coinflip. Again, my point is that this is an
ineradicable coinflip unless you really think that humanity can spend the
next billion years at exactly this intelligence level. If not, sooner or
later we need to confront transhumanity. Every extra year is another year
we have the opportunity to exterminate ourselves, so sooner is better.
Can't make it to utopia if the goo eats through an artery.
> It is therefore a constant threat whose
> level at any given instant is not only unknown but
> unknowable. And, being alien--it may well view us the
> same way.
Why are you attributing anthropomorphic xenophobia to an alien you just
got through describing as unknowable?
> **Never said it would. The more power it/they have,
> the greater the danger, was my point.
Anything above and beyond the simple fact of superintelligence is
irrelevant overkill.
> > As it happens, AI is much less risky than
> > nanotechnology. I think I know
> > how to build a Friendly AI. I don't see *any*
> > development scenario for
> > nanotechnology that doesn't end in a short war.
>
> **Yah--a VERY short war...
Yes, that was the implication.
> > Offensive technology has
> > overpowered defensive technology ever since the
> > invention of the ICBM, and
> > nanotechnology (1) makes the imbalance even worse
> > and (2) removes all the
> > stabilizing factors that have prevented nuclear war
> > so far.
>
> **Bingo.
Okay, you get this, you get the massive cloud of doom, you even get the
part where transhumanity is our chance out of it, so why are you still
focusing on the *totally ineradicable* risk of unfriendly transhumanity?
> > If nanotech,
> > we're screwed.
>
> **If nano goes amok or is in the wrong hands, you
> mean, I think. Of course, any hands may be wrong
> hands.
Let's see, what is the probability that nano winds up in at least one pair
of wrong hands... this is Earth, right? We're screwed.
> > The probability of humanity's
> > survival is the probability
> > that AI comes first times the probability that
> > Friendly AI is possible
> > times the probability that Friendliness is
> > successfully implemented by the
> > first group to create AI.
>
> **What, in your estimation, is the probability that
> military interests will seize the operation/the AI at
> the critical time? That they have/will infiltrate
> AI-creation efforts for this and monitoring purposes?
> Remember--they're thinking like hunter-gatherers,
> too...
This requires a military mind sufficiently intelligent to get why AI is
important and sufficiently stupid to not get why Friendliness is
important. Besides, what would you have me do about it?
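To put the arithmetic in the quoted estimate above in concrete terms,
here is a minimal sketch with purely illustrative numbers - the 0.5s are
placeholders, not anyone's actual estimates. The point is that survival
is a product of all three factors, so any one of them going to zero
takes the whole thing down with it.

    # Illustrative only: every probability below is a hypothetical
    # placeholder, not an estimate from this discussion.
    p_ai_first = 0.5            # AI arrives before nanotechnology
    p_friendly_possible = 0.5   # Friendly AI is possible at all
    p_implemented = 0.5         # first AI group gets Friendliness right

    p_survival = p_ai_first * p_friendly_possible * p_implemented
    print(p_survival)           # 0.125 - raising any one factor helps
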
> > If Friendly AI is
> > impossible, then humanity is
> > screwed, period.
>
> **Best info sources on this issue and on your take of
> this issue? On AI/SI?
I thought you'd never ask. I'd recommend:
http://singinst.org/intro.html
http://sysopmind.com/sing/PtS/navigation/deadlines.html
http://singinst.org/CaTAI.html
You can find some non-Yudkowskian material at:
http://dmoz.org/Society/Philosophy/Current_Movements/Transhumanism/
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence