From: Daniel Fabulich (daniel.fabulich@yale.edu)
Date: Sat May 16 1998 - 14:07:55 MDT
On Sat, 16 May 1998, Ian Goddard wrote:
> IAN: Your point is that function = utility.
> But we could have two functions, (1) a truth
> function that maps physical reality, and (2) a
> Big Lie function that masks physical reality.
> Both are useful to some, but only one is true.
> If "utility first" is my directive, I can choose
> either; if "truth first" is my principle I can
> choose only one. This limits my options, but
> as a rule it maximizes social outcomes, which
> is utilitarian, but utility comes second.
I think you misunderstood me. My point was simply that your reasons for
accepting a "truth-first" principle were utilitarian in origin: that
people would be better off if we all accepted "truth-first." Well, if we
WOULD be better off thanks to "truth-first," then utilitarianism demands
that we act according to "truth-first"; thus we would accept it BECAUSE of
utilitarianism, not in spite of it.
> What is more, since there is only one reality,
> yet myriad claims about it, the Big Lie function
> will be useful to most people and thus the demand
> for the utility of BLF will be greater than for
> TF, so if our directive is "whatever works to
> promote your idea," Big Lies will win the day.
But, as I think we're both agreed, Big Lies are BAD for us in the long
run. Therefore, according to utilitarianism, we should avoid Big Lies.
>
> When I apply a "truth first" principle based
> upon a scientific definition of truth, I submit
> to a "higher authority" that will judge my
> ideas (about any topic) accordingly. If I
> apply "utility first" then it's whatever
> I can get away with to promote my ideas.
>
How about the benevolence principle, an invention (I think) of JS Mill?
"Of all the actions I might choose, will this one create the most
happiness for the largest number of people?"
>
> ><thought experiment> Suppose you are studying an important effect in
> >quantum mechanics, but one which can be put off until later without
> >significant losses in utility. Then, you look out the window and you
> >see: A Drowning Child (tm) [the philosopher's favorite play toy!].
> >You could save that child now and study quantum effects later,
> >OR you could just ignore the child and continue the pursuit of the
> >one-to-one truth function. Despite the fact that utilitarianism demands
> >that you save that child, truth-first demands that truth comes first, and
> >utility second. You ignore the child, and finish observing your quantum
> >effect before even considering saving him/her. </thought experiment>
>
>
> IAN: An interesting dilemma, but I think it
> may be a false dilemma. Maybe I cannot swim,
> and I have no rope, or just don't care, so
> saving the drowning child is not useful to
> me; after all, that kid was a real nuisance.
Uhm... SINCE this is a thought experiment, I can always tweak it. I
should have mentioned that you are ABLE to save the child, if you decide
to, at a trivial loss of utility to yourself. (This was actually part of
the reason why I mentioned the "Drowning Child (tm)": ethical philosophers
REGULARLY use this as a thought experiment: you could either save the
child or do something else. Which do you do?)
>
> So it doesn't follow that utility dictates
> that I stop my work and save the child.
It does if you CAN save the child, and if saving the child would result in
minimal utility loss for you.
> But if I have an axiom, "All human life is
> sacred," and I see a life in eminent peril,
> I say "I believe that axiom is true, thus
> I must stop my work and save that child."
> It seems to me that the act of stopping my
> work to save another must rest on a truth.
>
What you're talking about is extensional equivalence, which is what
happens when two separate ethical theories require the same actions.
Interesting questions arise when two ethical theories are COMPLETELY
extensionally equivalent, but use different premises and arguments to
reach their conclusions. However, as I think the thought experiment
shows, truth and utility are NOT completely extensionally equivalent, and
when they disagree, utility ought to win out.
> IAN: It is TRUE that Germany would have been better
> off had they not started WWII and committed mass murder.
> It was useful for the Nazis to mask that truth,
> for their goals were not making Germany better
> but enemy extermination and global conquest.
OK, let's stretch the imagination for a moment and pretend that Naziism
happened because, and only because, the Nazis thought it would make the
people better off. Otherwise, you're not criticizing utilitarianism,
but something more akin to nationalism, which we both agree is a silly
moral premise.
If Germany WAS acting solely to make the German people better off, then we
can see clearly that they made a mistake in that regard: Germany was made
worse off by WWII, not better. By the very definition of utility, if
something makes you worse off, then it is not compatible with your utility.
Therefore, I draw the trivial conclusion that mass murder was NOT useful
to Germany, because it caused WWII, which went quite badly for Germany.
Utilitarianism does NOT demand, nor could it in any way be construed to
support, Naziism. (It may have been used at the time as rhetoric, but we
can now see that people who argued that Naziism would make the Germans
better off were wrong, and that utilitarianism was NOT on their side.)