From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Oct 21 1999 - 08:30:25 MDT
Sayke@aol.com wrote:
>
> In a message dated 10/19/99 7:56:21 AM Pacific Daylight Time,
> sentience@pobox.com writes:
> > [snip]
> naaaaa... im using trapped-animal heuristics. im playing a big
> two-outcome game, in which if i win, i stay alive to play again, and if i
> lose, i dont exist. but, it seems to me, that youve looked at the game, said
> "shit! thats hard! i give up; im not going to play", and proceeded to engage
> in a course of action not at all unlike a complex form of suicide.
No, I'm walking out of the game. I don't know if there's anything
outside... but I do know that if I keep playing, sooner or later I'm
going to lose.
> is there
> any effective difference? do you think you would survive the creation of a
> transcendent ai?
Flip me a coin.
> if not, why are you attempting to speed it along?
From your perspective: A coinflip chance is better than certain death.
From my perspective: It's the rational choice to make.
> im quite
> curious, and thats the gist of this post...
> whats the line? "do you hear that, mr anderson? thats the sound of
> inevitability... the sound of your death. goodbye, mr anderson..." baaah
> humbug. if the odds say ill die, i hereby resolve to die trying to stay
> alive. just because something is hard doesnt mean its impossible; slim is
> better than none, etc...
So why are you applying this logic to avoiding AI rather than to
creating it?
> > The next best alternative would probably be Walter Jon Williams's
> > Aristoi or Iain M. Banks's Culture. Both are low-probability and would
> > probably require that six billion people die just to establish a seed
> > culture small enough not to destroy itself.
>
> [note to self: read more good scifi...] but, i do have at least some idea
> of what you're talking about, due to shamelessly reading some of the spoilers
> on this list, and i cant help thinking that you seem to completely rule out
> uploading/neural engineering/whatever else... last time i checked, becoming
> the singularity was still a distinct possibility. is this no longer the case?
> or are you taking the position that it doesnt matter what we do; somebody,
> somewhere, will make a transcendent ai, and that will be the end of us...?
I'm taking the position that the hostile-Power/benevolent-Power
probabilities will not be substantially affected by whether the Power
starts out as an uploaded human or a seed AI. If you can transcend and
remain benevolent, then I can write a seed AI that will remain
benevolent. <borgvoice>Selfishness is irrelevant.</borgvoice>
I'm also saying that a hostile Power is not "me" even if, historically,
it started out running on my neurons.
Not that I care about any of this, but you do.
> and there is *no* chance that transcendent ai could be left undeveloped
> for a time long enough to allow enhancement/whatever to create a true
> transhumanity?
Sure. Around one percent.
> if there is such a chance, regardless of how slim, i think it
> should be tried...
Okay, you're giving up a fifty percent chance of surviving with AI in
favor of a one percent chance of creating transhumanity that *still*
leaves you with a fifty percent chance of winding up with hostile
Powers. You see what I mean about not being able to calculate risks?
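(A back-of-the-envelope comparison, using the illustrative figures above: the
AI route gives roughly a 0.5 chance of survival, while the delay-and-enhance
route gives roughly 0.01 * 0.5 = 0.005, if the 99%-failure branch ends in
nanowar. The exact numbers are guesses; the ratio between the two options is
the point.)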
> i understand that suppression of new technology almost
> certainly does more harm than good, but shit, what are the alternatives, and
> why are they any better?
The alternative is seeing what's on the other side of dawn instead of
trying to run away from the sunrise.
> it seems to me that you think that the absolute chance of humanity's
> survival is non-modifiable.
It's easy to modify it. Start a nuclear war. Ta-da! You've modified
our chances.
I think that the absolute chance of humanity's survival is
non-*improvable* over the course I've recommended.
> our actions modify the 'absolute' chances, do
> they not? in that sense, how can any chance be absolute? just because there
> is a likely attractor state that could be occupied very well by a
> transcendent ai, doesnt mean that it *will* be occupied by one...
> why dont we attempt to wait forever to create a transcendent ai? why
> should anyone work on one?
*I* will be working on one. If I stop, someone else will do it. You
cannot prevent an entire advanced civilization from creating AI when
anyone can buy the computing power for a few bucks and when the profits
on an incrementally improved AI are so high. If humanity does not
create nanoweapons, one of the six billion members *will* create an AI
eventually. If humanity does not create AI, one of the hundred and
fifty countries *will* start a nanowar.
> i understand that conventional ai will become
> increasingly important and useful, of course, but by not allowing programs to
> modify their source code,
I see - you're claiming that an entire industry is going to ignore the
profits inherent in self-modifying AI? And so will all the idealists,
among whose number I include myself?
> and not allowing direct outside access to zyvex and
> friends, and above all not actively working on making one, the odds of one
> occurring go down considerably, do they not?
No, the odds of AI occurring before nanowar go down. The odds of AI
being created in the long run, given the survival of humanity, remain
the same - as close to one as makes no difference.
> you sound like they are, well,
> inevitable, which i dont understand. they probably wont exist (anywhere
> nearby, of course) unless we make one. why should we make one?
Six billion people can't take sips from a tidal wave. Nanodeath or AI.
One or the other. Can't avoid both.
> > To you this seems like "defeatism" - which is another way of saying that
> > life is fair and there's no problem you can't take actions to solve.
>
> first off, yes, it does seem like defeatism, but thats not saying that
> life is fair, or that my actions will be successful in even coming close to
> solving the problems at hand. i can always take actions towards solving the
> problem. whether it works or not is, of course, quite uncertain, and thats
> not a problem.
It most certainly is a problem! This is exactly the kind of "Do
something, anything, so that we'll be doing something" attitude that
leads to the FDA, the War on Drugs, Welfare, and all kinds of
self-destructive behavior. Doing something counts for *nothing* unless
you *succeed*.
> trying is better than not trying;
No. Winning is better than losing.
> sitting on my ass might help
> the situation accidentally, true, but thats far less likely than if i
> actually tried to help the situation...
> it seems to me that what you're trying would reduce your odds of personal
> survival considerably, and i cant figure out why.
I think striving for AI considerably increases my odds of personal survival.
> > You're choosing plans so that they contain actions to correspond to each
> > problem you've noticed, rather than the plan with the least total
> > probability of arriving at a fatal error.
>
> i dont think anyone has nearly enough information to come close to
> estimating a total probability of arriving at a fatal error. if you think you
> do, enlighten me.
Absolute probabilities, no. The relative value of two probabilities, yes.
> it seems to me that the course of action i endorse has a
> muuuuuuuuuch lower total probability of arriving at a fatal error than yours,
> simply because no action i can take could make the outcome worse. how could
> my attempts to stop the development of transcendent ai possibly result in a
> worse outcome (than not trying to stop the development, or, eris forbid,
> actually helping it) for me?
It's called "nanotechnological weapons". Red goo can kill you just as
dead as hostile Powers. The difference is that we don't *know* whether
or not Powers will be hostile. Flip a coin. But with red goo, dead is dead.
Delay AI, get killed by goo. Wouldn't you look silly if, all along, the
Powers would have turned out to be benevolent? Oh, yes. Your actions
can make it *much* worse.
> actually, for all practical purposes, are not all other facts logically
> dependent on the fact of my existence?
No. If you die, the rest of the world will still be here. The speed of
light will remain constant.
> functional solipsism, man...
Okay, so you're nuts. Why should I care?
> > Maybe I'll post my intuitional analysis in a couple of days. But
> > basically... the world is going somewhere. It has momentum. It can
> > arrive either at a nanowar or at the creation of superintelligence.
> > Those are the only two realistic alternatives.
>
> well, i dont know if i can agree with the part about the "world going
> somewhere." evolution happens, of course, but you sound to me like your
> trending towards a "there is a plan" anthropicish mentality, which im
> surprised to hear from you.
Momentum doesn't imply a plan. It implies a trend with powerful forces
backing it.
> i agree that those are the only two realistic alternatives. however, i
> dont see why you would possibly be trying to assist in the development of any
> superintelligence other than yourself. what worthwhile goal does that serve?
I know what goal *I* think it serves. From your perspective... the
survival of humanity?
> you seem to have elevated it to the status of an end unto itself...? why!?
> hehe...
Go read my Web pages.
--
sentience@pobox.com          Eliezer S. Yudkowsky
http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way