From: Twirlip of Greymist (phoenix@ugcs.caltech.edu)
Date: Tue Dec 03 1996 - 00:21:20 MST
[Night of the Jaded Old-Timers]
On Dec 2, 11:07pm, Eliezer Yudkowsky wrote:
} My point was that, rather than your conception of the Powers being *in
} conflict* with mine, I had found specific logical flaws in your
(I'm just grabbing the handy post.) How can you be sure that your model
of the Beyond is extrapolatable? You say that you consider yourself to
be an incomplete human, and you feel a vast sense of awe when reading
GEB. But consider BlooP and FlooP and GlooP. In my (human-optimistic)
model you're in the position of BlooP (with some peculiar advantage)
admiring and appreciating the difference between itself and FlooP, which
the latter might not fully do, and extrapolating to some vastly more
powerful GlooP, which I (FlooP) don't believe in. It's possible that
GlooP is out there, beyond our imagination, but it also might be that
FlooP is the end point of algorithmic possibility. Similarly, if humans
(at least some of them) are the sentient equivalent of FlooP or the
Universal Turing machine, then Powers will be smarter -- vastly smarter
-- than us in inventability, or in what they consider obvious, but
equivalent to us in comprehensibility.
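(For anyone who hasn't opened GEB in a while: BlooP only allows loops
whose bound is fixed before they start -- the primitive recursive
functions -- while FlooP allows unbounded loops and searches. A toy
Python sketch, just an illustration and not part of the argument, of
the sort of function that separates the two; the Ackermann function is
the standard example:

    def factorial_bloop(n):
        # BlooP-style: the loop bound (n) is fixed before the loop starts,
        # so this stays within the primitive recursive functions.
        result = 1
        for i in range(1, n + 1):
            result *= i
        return result

    def ackermann_floop(m, n):
        # FlooP-style: unbounded recursion. Ackermann's function is total
        # but provably not primitive recursive -- FlooP can compute it,
        # BlooP cannot, yet we can still read and check its values.
        if m == 0:
            return n + 1
        if n == 0:
            return ackermann_floop(m - 1, 1)
        return ackermann_floop(m - 1, ackermann_floop(m - 1, n - 1))

    print(factorial_bloop(5))     # 120
    print(ackermann_floop(2, 3))  # 9 -- keep the arguments tiny; it explodes

The BlooP-level reader can still follow and verify what it could never
have written itself.)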
I do thank you for giving me these terms, though; I'm very happy to be
able to say that future minds could be smarter, inventing things we
couldn't invent in practical time, while we would still be able to
understand the results, at worst only in theory.
I'm not sure that I'm right. I can't be; even if someone could be, I
don't yet know the mathematics to appreciate the proof if there were one.
But I don't see how you can be so sure that you're right, or that we're
so wrong. You can't extrapolate up a damage gradient and assume the
curve will keep on going; it could plateau at some universal point.
I do think the Singularity can be a legitimate term. It can be
poetical, like 'sunrise'. It can be technical, for SF authors, like
Vinge or Niven ("Safe at Any Speed"): even if I'm right about it being
comprehensible ex post facto, that doesn't mean it's very inventable.
By definition, lots of rapid change should (or could) cause much
evolutionary change in society as markets and cultures adjust to new
conditions. And since I think the brain works evolutionarily, obviously
there will be a limit to what it can model decently.
And if someone turns the Earth into gray goo, organizes the goo into his
new brain, and uses his new power to solve game #11982 (or whichever) of
FreeCell: this is comprehensible. And inventable, obviously. But _I'd_
call it a Singularity.
(Very singular.)
(Sorry.)
} no reason why our paranoia should be binding on the Powers. Similarly,
} there is a major logical flaw in the ethical idea that the value of our
} lives is diminished by the presence of the Powers - to wit, there is no
} reason to presume so, and plenty of reasons why the Meaning of Life
} should be observer-independent. So our paranoia should not be retraced
Er. Value of our lives as valued by whom? My value of my life is
infinite. (Probably not, but let's approximate.) My value of a million
strangers is somewhat low. Presumably person J's value of her life is
infinite and her value of mine is low. Lower now that I've typed this.
:) My value of my life doesn't change in the presence of Powers -- but
what's their value of my life?
And why should the Meaning of Life be observer-independent? Meaning of
whose life? My meaning of a chicken's life is to feed me.
(How many vegetarians on the list? How many for reasons other than
health?) (Rich Artym still here? Do I have to be the subjectivist?)
Merry part,
-xx- Damien R. Sullivan X-) <*> http://www.ugcs.caltech.edu/~phoenix
For no cage was ever made to hold a creature such as he,
No chain was ever forged to bind the wind.
And when I think upon all the scars he left on me,
The cruelest are the ones I bear within.