Re: Paradox--was Re: Active shields, was Re: Criticism depth, was Re: Homework, Nuke, etc..

From: John Marlow (johnmarrek@yahoo.com)
Date: Sat Jan 13 2001 - 22:17:37 MST


**What can I say? But for the goal, you seem eminently
reasonable, as always.

--- "Eliezer S. Yudkowsky" <sentience@pobox.com>
wrote:
> John Marlow wrote:
> >
> > > > In which case, there may not be a damned
> > > > thing we can do about it.
> > >
> > > Yep.
> >
> > **GAAAK! And you proceed! Or are you simply that convinced of our
> > own incompetence? (But then, this would actually argue AGAINST
> > proceeding, if you think about it...)
>
> Why, oh why is it that the machinophobes of this world assume that
> researchers never think about the implications of their research?

**Me no machinophobe; see nanotech advocacy.

> When I was a kid, I thought I'd grow up to be a physicist like my
> father... maybe work on nanotechnology... I am here, today, in this
> profession, working on AI, because I believe that this is not just
> the best, but the *only* path to survival for humanity.

**Why?

>
> > **No--I am, for example, strongly in favor of
> > developing nanotechnology, and that is by far the
> > biggest risk I can see.
> >
> > **Building an AI brighter than we are might run a
> > close second. That I'm not for.
> >
> > **The first offers nearly limitless advantages for the risk; the
> > second offers only certain disaster. It's a risk-benefit analysis.
>
> On this planet *right now* there exists enough networked computing
> power to create an AI.

**Now, forgive me if I've missed something here
(please point it out)--but isn't AI a matter of
quality rather than quantity?

> Stopping progress wouldn't be enough. You'd have to move the entire
> world backwards, and keep it there, not just for ten years, or for a
> century, but forever, or else some other generation just winds up
> facing the same problem. It doesn't matter, as you describe the
> future, whether AI presents the greater or the lesser risk; what
> matters is that a success in Friendly AI makes navigating nanotech
> easier,

**Whoa. Lemme get this straight--you want to create
something brighter than us, and then put IT in charge
of developing nanotech? (Handing it, to be "paranoid,"
the best and most rapid means with which to get rid of
us, should it choose to do so.)

> while success in nanotechnology doesn't help AI. In fact,
> nanotechnology provides for computers with thousands or millions of
> times the power of a single human brain, at which point it takes no
> subtlety, no knowledge, no wisdom to create AI; anyone with a
> nanocomputer can just brute-force it.

**See above--quality v quantity (brute-force).

>
> It's very easy for me to understand why you're concerned. You've
> seen a bunch of bad-guy machines on TV, and maybe a few good-guy
> machines, and every last one of them behaved just like a human.

**Oh don't blame it all on Jim Cameron. On the
contrary, I have three concerns:

**One: Yeah--the thing behaves like a human because
it's programmed by humans and likely monkeyed around
with by military types, whose thinking I don't care
for.

**Two: However the thing starts off, it glitches/goes
chaotic/turns psychotic and can't be shut off.

**Three: The thing behaves NOTHING like a human--is,
in fact, completely alien. The basis of its decisions
will therefore be unknowable. It might push us to the
limits of the universe and help us create an
everlasting utopia--or exterminate us all tomorrow for
a reason we couldn't even guess BECAUSE it's nothing
like a human. It is therefore a constant threat whose
level at any given instant is not only unknown but
unknowable. And, being alien--it may well view us the
same way.

> Your reluctance to entrust a single AI with power is a case in
> point. How much power an AI has, and whether it's a single AI or a
> group, would not have the slightest impact on the AI's
> trustworthiness.

**Never said it would. The more power it/they have,
the greater the danger, was my point.

> That only happens with humans who've spent the last three million
> years evolving to win power struggles in hunter-gatherer tribes.
>
> As it happens, AI is much less risky than nanotechnology. I think I
> know how to build a Friendly AI. I don't see *any* development
> scenario for nanotechnology that doesn't end in a short war.

**Yah--a VERY short war...

> Offensive technology has overpowered defensive technology ever
> since the invention of the ICBM, and nanotechnology (1) makes the
> imbalance even worse and (2) removes all the stabilizing factors
> that have prevented nuclear war so far.

**Bingo.

> If nanotech, we're screwed.

**If nano goes amok or is in the wrong hands, you
mean, I think. Of course, any hands may be wrong
hands.

> The probability of humanity's survival is the probability that AI
> comes first times the probability that Friendly AI is possible
> times the probability that Friendliness is successfully implemented
> by the first group to create AI.

**What, in your estimation, is the probability that
military interests will seize the operation/the AI at
the critical time? That they have infiltrated or will
infiltrate AI-creation efforts for this purpose, and
for monitoring? Remember--they're thinking like
hunter-gatherers, too...
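
**Spelled out, the survival estimate quoted above is a chained product of
three probabilities; a minimal sketch (the 0.5 values below are purely
illustrative assumptions, not figures from this exchange):

\[
P(\text{survival}) = P(\text{AI comes first}) \times P(\text{Friendly AI possible})
  \times P(\text{Friendliness implemented by the first group})
\]

**At an assumed 0.5 apiece that product is 0.125, and if any single
factor drops to zero the whole product drops to zero.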

> If Friendly AI is impossible, then humanity is screwed, period.

**Best info sources on this issue and on your take on
this issue? On AI/SI?

john marlow

>
> -- -- -- -- --
> Eliezer S. Yudkowsky              http://singinst.org/
> Research Fellow, Singularity Institute for Artificial Intelligence



