Re: >H: The next 100 months

From: Sayke@aol.com
Date: Thu Oct 21 1999 - 00:37:34 MDT


In a message dated 10/19/99 7:56:21 AM Pacific Daylight Time,
sentience@pobox.com writes:

> Sayke@aol.com wrote:
> >
[snip; appy polly loggies for misinterpreting your take on my worldview]

> I don't think you explicitly think the world is fair; I think you're
> using fair-world heuristics.

    naaaaa... im using trapped-animal heuristics. im playing a big
two-outcome game, in which if i win, i stay alive to play again, and if i
lose, i dont exist. but it seems to me that youve looked at the game, said
"shit! thats hard! i give up; im not going to play", and proceeded to engage
in a course of action not at all unlike a complex form of suicide. is there
any effective difference? do you think you would survive the creation of a
transcendent ai? if not, why are you attempting to speed it along? im quite
curious, and thats the gist of this post...
    whats the line? "do you hear that, mr anderson? thats the sound of
inevitability... the sound of your death. goodbye, mr anderson..." baaah
humbug. if the odds say ill die, i hereby resolve to die trying to stay
alive. just because something is hard doesnt mean its impossible; slim is
better than none, etc...

> > best absolute probability of what, exactly? and why is that to be strived
> > for? if you dont trust philosophy and you dont trust your wetware, what do
> > you trust? ("and who do you serve?" sorry... damn that new babylon 5
> > spinoff...)
>
> The next best alternative would probably be Walter Jon Williams's
> Aristoi or Iain M. Banks's Culture. Both are low-probability and would
> probably require that six billion people die just to establish a seed
> culture small enough not to destroy itself.

    [note to self: read more good scifi...] but, i do have at least some idea
of what youre talking about, due to shamelessly reading some of the spoilers
on this list, and i cant help thinking that you seem to completely rule out
uploading/neural engineering/whatever else... last time i checked, becoming
the singularity was still a distinct possibility. is this no longer the case?
or are you taking the position that it doesnt matter what we do; somebody,
somewhere, will make a transcendent ai, and that will be the end of us...?

> > and anyway, it seems to me that your basicly saying "the powers will eat
> > us if the powers will eat us. their will be done on earth, as it is in
> > heaven, forever and ever, amen." damn the man! root for the underdogs! etc...
> > (yes, i know my saying that probably has something to do with my tribal-issue
> > wetware. so? it makes sense to me. if it shouldnt, point out the whole in my
> > premises)
>
> Yes, your intuitive revulsion is exactly my point. I'm saying that
> there's nothing we can do about it, and you refuse to accept it. There
> may be actions that could very slightly reduce the risk of humanity
> being wiped out, like trying to wait to create AI until after a survival
> capsule has arrived at Alpha Centauri. There is no anti-AI action we
> can take that will improve our absolute chances. The optimal course is
> to create AI as fast as possible.

    and there is *no* chance that transcendent ai could be left undeveloped
for a time long enough to allow enhancement/whatever to create a true
transhumanity? if there is such a chance, regardless of how slim, i think it
should be tried... i understand that suppression of new technology almost
certainly does more harm than good, but shit, what are the alternatives, and
why are they any better?
    it seems to me that you think that the absolute chance of humanity's
survival is non-modifiable. our actions modify the 'absolute' chances, do
they not? in that sense, how can any chance be absolute? just because there
is a likely attractor state that could be occupied very well by a
transcendent ai, doesnt mean that it *will* be occupied by one...
    why dont we attempt to wait forever to create a transcendent ai? why
should anyone work on one? i understand that conventional ai will become
increasingly important and useful, of course, but by not allowing programs to
modify their source code, and not allowing direct outside access to zyvex and
friends, and above all not actively working on making one, the odds of one
occurring go down considerably, do they not? you sound like they are, well,
inevitable, which i dont understand. they probably wont exist (anywhere
nearby, of course) unless we make one. why should we make one?

> To you this seems like "defeatism" - which is another way of saying that
> life is fair and there's no problem you can't take actions to solve.

    first off, yes, it does seem like defeatism, but thats not saying that
life is fair, or that my actions will be successful in even coming close to
solving the problems at hand. i can always take actions towards solving the
problem. whether it works or not is, of course, quite uncertain, and thats
not a problem. trying is better than not trying; sitting on my ass might help
the situation accidentally, true, but thats far less likely than if i
actually tried to help the situation...
    it seems to me that what youre trying would reduce your odds of personal
survival considerably, and i cant figure out why.

> You're choosing plans so that they contain actions to correspond to each
> problem you've noticed, rather than the plan with the least total
> probability of arriving at a fatal error.

    i dont think anyone has nearly enough information to come close to
estimating a total probability of arriving at a fatal error. if you think you
do, enlighten me. it seems to me that the course of action i endorse has a
muuuuuuuuuch lower total probability of arriving at a fatal error than yours,
simply because no action i can take could make the outcome worse. how could
my attempts to stop the development of transcendent ai possibly result in a
worse outcome (than not trying to stop the development, or, eris forbid,
actually helping it) for me?
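    to put that dominance claim in symbols (a rough sketch; the labels D,
a_stop, and a_help are just ones im making up here): let D be the event "i
dont survive", let a_stop be opposing/delaying transcendent ai, and let
a_help be helping build it. all im claiming is

    P(D | a_stop) <= P(D | a_help)

so *if* that inequality holds, opposing cant make my personal odds any worse,
whatever the actual numbers turn out to be. whether it holds at all is, of
course, the real disagreement...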

> > does not 'the state of having goals' depend upon personal survival?
>
> Yes.
>
> > if so, are not all other goals secondary to personal survival?
>
> No. The map is not the territory. This is like saying, "Does not the
> state of having beliefs depend upon personal survival? If so, are not
> all other facts logically dependent on the fact of my existence?"

    actually, for all practical purposes, are not all other facts logically
dependent on the fact of my existence? what good to me is stuff that im
unaware of and unaffected by? all my *experience of* and *interaction with*
other facts is logically dependent on the fact of my existence, which,
functionally, is the exact same thing as saying "all other facts are
logically dependent on the fact of my existence."
    functional solipsism, man...

> > the singularity is not, to me, intuitively obvious as "the thing to do
> > next." and, i do not trust any kind of intuition, if i can help it. why do
> > you? yes, im asking for to justify your reliance on intuition (if thats what
> > it is), and thats philosophy. if you will not explain, please explain why you
> > will not explain.... heh.
>
> Maybe I'll post my intuitional analysis in a couple of days. But
> basically... the world is going somewhere. It has momentum. It can
> arrive either at a nanowar or at the creation of superintelligence.
> Those are the only two realistic alternatives.

    well, i dont know if i can agree with the part about the "world going
somewhere." evolution happens, of course, but you sound to me like your
trending towards a "there is a plan" anthropicish mentality, which im
surprised to hear from you. elaborate, if you will.
    i agree that those are the only two realistic alternatives. however, i
dont see why you would possibly be trying to assist in the development of any
superintelligence other than yourself. what worthwhile goal does that serve?
you seem to have elevated it to the status of an end unto itself...? why!?
hehe...

> Anything else, from
> _Aristoi_ to _Player of Games_, is simply not plausible on the cultural
> level. Our choice is between destruction and the unknowable. And
> that's the only real choice we have.

    im not concerned about the unknowable, yet... i dont need to know it,
right now, and i can always work on figuring it out later. but, im pretty
concerned about the "destruction" part, obviously. unless im missing
something, you arent. why?

> > and are intuitions not a function of your tribalistic and vestigial
> > wetware, as well as my instinct for survival?
>
> Yes, but my intuitions about factual matters actually *work*. That's
> why I rely on them, the same reason I rely on logic. My intuitions
> about moral and social matters are as untrustworthy as anyone's, of course.

    whats an intuition, whats a factual matter, and how do you separate the
intuitions that work from [confirmation bias? fuck, whats that called... ],
and how do you decide when one actually worked? "i remember feeling like
something similar to this was going to happen" doesnt cut it. that seems way
too much like theology for my tastes... shit. well, when you decide to post
that writeup about your intuitions, im all ears... eyes... sensory organs...
    fuck proofreading. im tired. o yea: kudos to ya for your writeup at
salon...

"we are currently lodged in the asshole of an exponential growth curve, and
it is my sincere hope that we soon come rocketing out and into the
smooth-walled blue water of the future." -- my comrade 13fingers

sayke, v2.3.05


