From: Dan Fabulich (daniel.fabulich@yale.edu)
Date: Fri Mar 03 2000 - 11:47:41 MST
> >The fact that I'm going to believe X *normally* doesn't provide any argument
> >at all for believing X now, but the fact that I'll believe X at the end of
> >inquiry *does* provide me with reason to believe X. ... your scientific
> >process ... argument ... it's a dismal failure if it fails at all, because
*nothing* like the argument from the best answer applies to his cloudier crystal
> >ball. ... [the same claim is made several more times in other words]
>
> I think this is just wrong. The difference between your beliefs now and at
> the "end" of inquiry is made up of a bunch of little differences between
> nearby points in time. Anything that informs you about your beliefs at any
> future time implicitly informs you about your beliefs at the "end". I could
> prove this in a Bayesian framework if you would find that informative.
Go ahead. Right off the bat, I posit that you can't provide me with such
a proof at all if I'm going to have an infinite number of logically
independent thoughts about ethics (as at an Omega point, if any): if the
thoughts are logically independent, then no finite run of them constrains
the rest, let alone their limit.
Before you begin, however, take note of an interesting fact. (You jammed
this into [argument] brackets, without, as I see it, answering the point.)
The probability that I'll change my beliefs in light of what your
computation tells me puts an upper bound on the certainty of your
prediction; in particular, if it is 100% likely that I will change my
beliefs in light of your report, your report can have no certainty at all.
Even a process which only attempts to predict what beliefs I'm LIKELY to
have must revise its judgment to account for how I'll change my beliefs in
light of its report, which means its computation has to take the result of
its own computation into account before it can yield an answer. And the
argument holds for arbitrarily weak certainty on the computer's part: any
certainty greater than "no certainty whatsoever" suffers from this problem
if the chance that I'll change my beliefs is unity.
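To make the bound explicit (the formalization here is mine, not anything
you've committed to): let C be the event that I change my beliefs upon
reading your report, and suppose the report was computed without
accounting for its own delivery, so that it describes the beliefs I would
have held undisturbed. Then
    Pr(report correct) <= Pr(not-C) = 1 - Pr(C),
so Pr(C) = 1 forces the report's certainty to zero. The only way out is
to predict my POST-report beliefs instead, and that is exactly the
self-reference above: the report is an input to the very quantity it is
trying to state.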
Let me add that I wouldn't change my beliefs based on what your
computation tells me *simply* to be difficult; I'd change them because I'm
hoping that it will give me the *right* answers, and that, upon
encountering them, I'll want to adopt those beliefs straightaway.
But consider ANOTHER interesting fact. I'm either going to change my
beliefs in light of your computation's report, or I'm not. If I'm not,
then the computation hasn't told me anything new. (Nor will the fact that
I'll never change my belief that X give me any additional reason to
believe that X.) But if I AM going to change my beliefs, in the hope of
making my beliefs converge on the right answers, then the computation
can't tell me anything with any certainty whatsoever.
All this is just another way of saying that I can't know what I'm going to
decide until I actually decide, and that I, as a computer, can't predict
the results of my own computation.
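Here is a toy sketch of that fixed-point problem (entirely my own
construction, not anything from your Bayesian framework; booleans stand in
for belief states, and the predictor just searches for a report that
survives its own delivery):

    def predict(update_rule, candidate_reports):
        # Look for a self-consistent report: one the agent still
        # holds after updating on having heard it.
        for report in candidate_reports:
            if update_rule(report) == report:
                return report  # fixed point: prediction survives delivery
        return None            # no stable report exists; certainty is zero

    # An agent certain to change beliefs on hearing any report
    # (Pr(change) = 1) leaves the predictor nothing stable:
    contrary = lambda report: not report
    print(predict(contrary, [True, False]))    # -> None

    # An agent who simply adopts whatever is reported hands the
    # predictor a trivially self-fulfilling fixed point:
    credulous = lambda report: report
    print(predict(credulous, [True, False]))   # -> True

Whether a stable report exists at all depends on the agent's update rule,
and the update rule takes the predictor's own output as its input; nothing
guarantees a fixed point in general.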
> Let's consider a physics analogy. [...] You want the mass of the
> particular atom you have in mind, and no you aren't going to tell me
> which one that is.
This is a totally faulty analogy. Quite unlike your example, the question
of what ethical beliefs I'm going to have is entirely well-posed and
completely verifiable after the fact. (So long as we take functionalism to
be right, and I do.)
> The analogy should be obvious: Your exact moral beliefs at any point in
> time will be determined by your brain state, some of which will be
> unobservable by the rest of us. That doesn't mean that we can't learn
> most everything there is to know about morals by learning how brains
> evolve their moral beliefs.
Look, I'm sure this machine could tell me a lot about you, and it might
tell you a lot about me (it would be quite a feat if I ever came to know
what you're going to do as well as you do!), but the whole point of this
machine is to tell ME about me (or us about us), so that it can tell us
what our ethical beliefs will/should be. That's precisely what this
process can't do.
-Dan
-unless you love someone-
-nothing else makes any sense-
e.e. cummings