on the odds of nanowar, and a bit on functional solipsism

From: Sayke@aol.com
Date: Sun Oct 24 1999 - 04:20:09 MDT


hi.

    well, it's been a couple of days since my last attempted parry-thrust at
eliezer's points over the question of whether we should actively try to make a
transcendent ai. i blame this delay on a complex series of events involving
all kinds of mundane things... but anyway, i think our motivational/ethical
differences might very well stem from two premise differences. i feel
longwinded today. stand clear, no step, you have been warned. anyway, here
goes.

        perceived premise difference 1:
    sayke: nanowar* is _less_ likely to occur than human pseudo-uploading*
before the creation* of a transcendent ai*.
    eliezer: nanowar is _more_ likely to occur than human pseudo-uploading
before the creation of a transcendent ai.
        where:
    nanowar = red goo; not caused by a transcendent ai. will kill me/us
almost certainly.
    human pseudo-uploading = nondestructive, probably nanotech-assisted,
personal transcendence. method? whatever method turns out to be doable, if
any... will increase the odds that i/we live* for a very long time.
    creation = this is accidental creation; that is to say, without our help
(mine, eliezer's, or that of other people reading this), and even in spite of
our efforts to slow it down and do it in a sanitary ai playground first. this
means somebody will do it with their basement beowulf cluster, or that
commercial ai will quietly transcend without anybody's help. unofficial
creation; beyond-our-control creation; whatever...
    transcendent ai = big, self-evolving, badass ai. will probably (please,
other plausible options?) either eat the solar system, including me/us, or
leave me/us alone and take off into space, or transcendify me/us into
badassness, in that order of plausibility, with several orders of magnitude
between each likelihood...
    live = to continue personal existence; involves discussion of functional
solipsism, enlightened self-interest, and other ethical/motivational concerns.
i'll get back to this; why trying to live is good is what my second premise
difference is about. shit, this setup feels clumsy...

    alrighty. i would first (first?) like to say that, quite frankly, i made
the mistake of tending to equate the odds of nanowar with the odds of more
conventional wars of mass destruction. i thought nanowar was fairly unlikely,
because of fairly conventional mutually-assured-destruction type arguments,
and because i was underestimating the potential that comes with putting
accidents, suicidal people, and nanotech all into your bong, at once, and
lighting up. i hereby suspend that opinion, because i really think i need to
get some input on this.
    however, my main argument in support of my premise would be that, by the
time nanotech is advanced and common enough to make a nanowar likely, nanotech
would be advanced enough to transcendify yours truly (aka me/us). it seems to
me that nanowar is unlikely to be started by the intentional act of a Large
Power, for MAD-ish reasons. it basically seems to me that nanowar would
probably be started either by accident or by an Evil Genius, and personal
ascension could very well happen before this. comments? does this hold?
    alright... on to the second premise.

        perceived premise difference 2:
    sayke: functionally*, significance* _ceases_ to exist if i* don't exist*
to label it as such.
    eliezer: functionally, significance _continues_ to exist if i don't exist
to label it as such.
        where:
    functionally = for all practical purposes, and from my individual point
of view, and for use in an ethical/motivational system.
    significance = motivation points. value... etc... i don't think i need to
try to define this with terrible rigor, because our definitions of
significance are probably pretty similar... threads here have probably been
based around discussion of significance... if you wanna give me pointers to a
relevant past thread, or if you wanna start another one about this, go for
it...
    i = the point of view that i am... that which bestows significance. there
have been threads about this. i've seen 'em. again, if you wanna give me
pointers to a relevant past thread, or if you wanna start another one about
this, go for it...
    don't exist = death is presumed to be the end of personal existence...

    by definition i'm right ;) but, i don't think i'll be let off that easy.
still... it seems to me that, well, by definition, i'm right. all significance
i can interact with goes away when i do, right? is that not exactly the same,
functionally, as saying "all significance, period, goes away when i do"? if
not, what's the difference? or am i just missing something obvious that i
really should know about...? is it just my turn to do the "i'm wrong" dance?
hehe...
    next episode, on extropy-l: does sayke do the "i'm wrong" dance? or will
he go postal on a herd of innocent churchgoers with a brace of flare guns?
catch extropy-l, same time, next week, only on channel 23; 'your source for
nifty shit (tm).' brought to you by JOE'S EATS. 'hungry? eat at joe's!
(tm)'...
    alright, i'm tired. may you live in interesting times!

sayke, v2.3.05


