From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Mon Dec 02 1996 - 22:07:17 MST
More on this:
> >In conclusion, remember that I might not be right, but if I say you're
> >wrong, you're almost certainly wrong.
First, this is a direct corollary of Sturgeon's Law.
Since 90% of everything is crap, my a priori chance of being right is
10%, but your chance of being wrong is 90%.
Second, consider the nature of logical structures. A logical structure
is only as strong as its weakest link. A single flaw in a linear
logical structure suffices for its destruction. If I find a flaw in a
non-redundant part of your idea, it's gone - unless you know something I
don't, or you already thought around the flaw but didn't bother to post
it, both of which are possible.
So if I say you're wrong, *specifically*, you're probably wrong. It's
not that if my ideas compete with yours, mine will win. Your idea being
in competition with mine does not make it wrong. If I specifically
target your idea and bring it down, though, that's it.
My point was that, rather than your conception of the Powers being *in
conflict* with mine, I had found specific logical flaws in your
extrapolation - so *that* was out *regardless* of whether my own
conception was right.
The specific logical flaws are a bit harder to convey. I have a
cognitive model of the meme of the Powers regarding us as insignificant,
with the idea traced back to its roots in evolutionary psychology - the
paranoia of being replaced. The logical flaw in this is that there is
no reason why our paranoia should be binding on the Powers. Similarly,
there is a major logical flaw in the ethical idea that the value of our
lives is diminished by the presence of the Powers - to wit, there is no
reason to presume so, and plenty of reasons why the Meaning of Life
should be observer-independent. So our paranoia should not be recast
as an ethical conclusion.
As for the idea that there might be Powers not bound by ethics, I can't
disprove it, except to say that in a goal-based cognitive system there
will always be the concept of "The Meaning of Life", the ultimate goal
in whose terms all subgoals are interpreted. A sufficiently smart Power
should question the value of all goals and subgoals and would therefore
begin a search for the Meaning of Life; whether the Power would be bound
by the newly synthesized ethical system depends on its degree of
self-redesigning capability and how its pleasure centers are cognitively
situated. There's a lot more to this, including the explanation of why
humans can engage in counter-reproductive behavior such as joining
nunneries - or why a perceived superior goal can override our instincts
- but I'm not going to go into it right now. I think that any Power
smart enough to be a serious threat to humanity will be bound by the
Meaning of Life, but that's just my professional opinion.
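To make the goal-structure point concrete, here is a toy sketch - not a
claim about any real cognitive architecture, and the names are purely
illustrative. The idea it shows is just the one above: every subgoal's
value is interpreted in terms of the goal it serves, so questioning the
value of any goal, and then of the goal above it, bottoms out at a single
ultimate goal - the "Meaning of Life" slot.

# Toy illustration only (Python) - not a real cognitive architecture.
# Each Goal's value is justified by the parent goal it serves; tracing any
# chain of justifications terminates at a single root goal.

class Goal:
    def __init__(self, name, serves=None):
        self.name = name
        self.serves = serves  # the higher goal this subgoal is valued in terms of

    def ultimate_goal(self):
        """Question this goal's value, then the value of the goal it serves, and so on."""
        goal = self
        while goal.serves is not None:
            goal = goal.serves
        return goal  # the only goal whose value cannot be passed further up

meaning_of_life = Goal("Meaning of Life")           # ultimate goal, taken as terminal
survive = Goal("survive", serves=meaning_of_life)   # subgoal: valued only as a means
eat = Goal("eat", serves=survive)                   # sub-subgoal

print(eat.ultimate_goal().name)  # -> "Meaning of Life"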
More on this:
> Perhaps you should have stayed on the prozac. As far as I can see,
> you are an arrogant bastard...and that's putting it nicely.
My "arrogance" is irrelevant. What counts on an Extropian mailing list
is the number of new ideas in the post and how much thought they
provoke. The truth of the ideas also counts, but not as much. I can't
resist pointing out that your post contains exactly zero novel,
stimulating ideas - unlike, say, this one. Even this paragraph
introduces an intriguing new standard for the value of posts.
> Ingredi...but not Externus
Illegitimus non carborundum.
Eliezer S. Yudkowsky
--
sentience@pobox.com    Eliezer S. Yudkowsky
http://tezcat.com/~eliezer/singularity.html
http://tezcat.com/~eliezer/algernon.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I know.