Re: Singularity-worship

From: Eliezer Yudkowsky (sentience@pobox.com)
Date: Wed Dec 11 1996 - 12:59:05 MST


> Sounds like a fine idea, but I used no sophistry.

You weren't the one I was thinking of, actually.

To quote (by permission) Eric Watt Forste:
> The word "self-justifying" is empty of content. Justification is a
> relationship between two distinct information structures. If *anything*
> is self-justifying, then everything is self-justifying; hence, the
> phrase "self-justifying" does not distinguish a set to which it applies
> from a set to which it does not apply. Terms that do not distinguish are
> empty of content. Or perhaps you are prepared to explain to me what is
> the precise difference between self-justifying and non-self-justifying
> entities?

> When you claim to have a
> solution to the deepest problem in Philosophy, you should be prepared to
> defend it vigorously, have answers to difficult questions, and not just
> expect everyone to automatically call you a genius.

I am not objecting to genuine challenges to my ideas. A reflexive
argument that can be instantiated with any word whatsoever is not a
genuine challenge. Only the last sentence had any logical value, and I
answered that with an example.

I have explicitly denied having a solution to the First Cause, AND I
have explicitly said that the First Cause is not the deepest problem
in philosophy.

And what is this whole business where every time I give a rational reply
to an objection (rational or otherwise), people respond with a personal
attack? Whether I am a genius has no impact on the logical strength of
my ideas on the First Cause, so I don't really care, speaking in my
capacity as a philosopher, whether people call me a genius or not. My
capacity as the originator of Algernon's Law might have different ideas;
there, my intellectual capacities are actually relevant.

--------
> That would seem to easily cover both of your cases. To repeat for the third
> time, what's the difference between cognitive and computational causality?

I am not claiming that cognitive causality will not run on a Turing
machine. I am saying that, when discussing "causality", one must be
aware of how one's own cognitive architecture affects one's viewpoint.
It is not particularly relevant whether our causal-analysis modules will
run on a Turing machine; the important thing is to say how they work.
We know how Turing machines work. Do you know how causal analysis
works? Can you design a computer program that will do it? A Universal
Turing Machine is not an acceptable answer to this AI problem.

"Cognitive causality" is how humans perceive causality. It is the study
of those cognitive modules which perform causal analysis. It is
distinct from the study of Turing machines, much as designing
spreadsheet programs is distinct from the study of Turing machines.

> > Disclaimer: Unless otherwise specified, I'm not telling you
> > everything I know.
> I think you're telling me far more than you know.

After much work, I have thought up a gracious and clever response to
this:
"The change is permanent, and thanks for the tip."

-- 
         sentience@pobox.com      Eliezer S. Yudkowsky
          http://tezcat.com/~eliezer/singularity.html
           http://tezcat.com/~eliezer/algernon.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.

