From: Dan Fabulich (daniel.fabulich@yale.edu)
Date: Tue Nov 02 1999 - 22:37:26 MST
'What is your name?' 'Eliezer S. Yudkowsky.' 'IT DOESN'T MATTER WHAT
YOUR NAME IS!!!':
> The internalist theory of proof is false. Given time and money, it
> would actually be fairly easy to set up a situation where the most
> rational explanation is the false one, a situation where Occam's Razor
> doesn't work.
OK, gotcha. Now that I better understand what you're saying, I realize
that not only is this not a refutation of internalism, Putnam covers a
case *exactly* like this in his book. I'll quote, since I don't feel
inspired this evening.
--------
"To reject the idea that there is a coherent 'external' perspective, a
theory which is simply true 'in itself', apart from all possible
observers, is not to *identify* truth with rational acceptability. Truth
cannot simply *be* rational acceptability for one fundamental reason;
truth is supposed to be a property of a statement that cannot be lost,
whereas justification can be lost. The statement 'The earth is flat'
was, very likely, rationally acceptable 3,000 years ago; but it is not
rationally acceptable today. Yet it would be wrong to say that 'the earth
is flat' was *true* 3,000 years ago; for that would mean that the earth
has changed its shape. ... "
"What this shows, in my opinion is not that the externalist [here, he
means metaphysical realist] view is right after all, but that truth is an
*idealization* of rational acceptability. We speak as if there were such
things as epistemically ideal conditions, and we call a statement 'true'
if it would be justified under such conditions. 'Epistemically ideal
conditions', of course, are like 'frictionless planes': we cannot really
attain epistemically ideal conditions, or even be absolutely certain that
we have come sufficiently close to them. But frictionless planes cannot
really be attained either, and yet talk of frictionless planes has 'cash
value' because we can approximate them to a very high degree of
approximation."
---------
In your example, in which you have a windfall at the same time as you
perform an act of charity, it is rationally acceptable to believe that the
windfall was a coincidence. However, you're right, it could well be the
case that the windfall was NOT a coincidence. If we accept internalism,
then all this means is that, under ideal epistemic conditions (for
example, your being aware of this fellow who tries to reward people for
acts of charity), it would NOT be rationally acceptable to believe that it
was a coincidence.
According to my current favorite model of my experiences, "ideal epistemic
conditions" is the situation in which you know everything you can know
about the "real world." In this case, the words "real world" refer to a
theoretical construct in my mind, which in turn is part of a theory which,
I find, correctly explains/predicts my experiences.
Just as it WAS rationally acceptable to believe that the earth was flat
back when nobody knew better, so in the imagined scenario would it be
reasonable to believe that your finding the book was sheer coincidence.
This does not mean that the above claims are true, even under an
internalist "idealized justification" view.
Hal pointed out that I may have misapplied the doctrine in the case of the
Matrix Hypothesis. My whole point was to show that if you would act as if
the Matrix Hypothesis were false, even if some proof of it were presented
to you, then the Matrix Hypothesis would be the sort of hypothesis (and
there are very few of these, I think!) which is not rationally acceptable
under any circumstances. Some examples of hypotheses like these are: "I
do not exist," "the real world does not exist," "my actions have no impact
on the real world or on myself," etc. If the Matrix Hypothesis is not
acceptable under any circumstances, then it is not acceptable under ideal
epistemic conditions, and thus it is false.
(I feel quite certain that the claim: "I am living in a Matrix world, but
the creators of the Matrix have never intervened and never will" is false
in the manner I've described.)
BTW, I'm almost inclined to say that you can't HELP but install a
world-view like this into an AI, since the anti-realist/internalist theory
of reference is EASILY explained to an AI. It's so easy, in fact, that
you may accidentally wind up coding your AI in an internalist way when you
had MEANT to code it to be a metaphysical realist!
Under anti-realism, the internal symbol "ball" simply refers to the
internal theoretical construct of a ball. Any theory which incorporates
this term must correctly explain/predict the AI's experiences, which are
*also* internal. Adopting a realist approach would require you to CODE IN
an idea of the real world, and explain to it how the internal symbols
refer to the real world out there. Unfortunately, this approach, as I've
argued, must necessarily fail, unless you're going to code in a "because I
say so!" theory of reference. And, of course, if you accidentally code up
the words "real world" to refer to the AI's internal theoretical construct
of the "real world," then you've accidentally coded an anti-realist AI.
As Putnam put it: "Since the objects [of reference] *and* the signs are
both internal to the scheme of description, it is possible to say what
matches what. Indeed, it is trivial to say what any word refers to
*within* the language the word belongs to, by using the word itself.
What does 'rabbit' refer to? Why, to rabbits, of course! What does
'extraterrestrial' refer to? To extraterrestrials (if there are any)."
I presented an example similar to this when I said that the character 7
simply referred to that theoretical invention of ours, the number seven.
So with the real world, balls, horses, etc. Each of these words refers to
internal theoretical constructs; we judge the theories based on their
explanatory/predictive value.
-Dan
-unless you love someone-
-nothing else makes any sense-
e.e. cummings