From: GBurch1@aol.com
Date: Sun Nov 21 1999 - 07:51:50 MST
In a message dated 99-11-14 11:05:34 EST, I wrote:
> Science and technology can tell you the "is", but the scientific method,
> standing alone, cannot tell you the "ought". Ultimately, science and
> technology will provide us with a complete list of what we CAN do, but
> we'll still have to face the question of what we OUGHT to do.
And the responses should teach me never to be too dense and epigrammatic in
what I write here, or I'll live to regret it :-)
Skipping, in a sense, to the end of the discussion (but, as I hope to
explain, really not the end, but only an intermediate waypoint), J.R. Molloy
posted an extended quote from an article by EO Wilson published earlier this
year in the Atlantic:
"The Biological Basis of Morality"
http://www.theatlantic.com/issues/98apr/biomoral.htm
"A Scientific Approach to Moral Reasoning":
http://www.theatlantic.com/issues/98apr/bio2.htm
which, as it happens, I've previously endorsed to my friends as containing
very valuable insight into how we might finally be on course to develop a
truly rational ethical philosophy. I agree with almost everything Wilson
says there and in his book "Consilience" about how inquiry into the
biological basis of human morality can both illuminate what's worth revering
in biologically-based moral sentiments and suggest how we might go about
developing more rational ones.
But Wilson's own thinking on these lines leads to what may be the limits of
at least a biologically-based scientific approach to morality. As I wrote in
my review of "Consilience" in Extropy Online magazine
(http://www.extropy.org/eo/articles/gbcurrent.html),
Wilson ultimately turns away from the prospect of transcending the human
animal BECAUSE he sees biology as the only possible basis for a "scientific
morality": He believes we should not transcend the human animal because it is
only as humans - with our particular, evolved ethical instincts - that we are
moral. Wilson's style of evolutionary morality is ultimately
backward-looking and provides little basis for a transhuman morality.
Now I do see that Wilson has in fact identified a solid basis for a rational
and "scientific" morality that can guide humans through transcendence of
their current biological form, and that is in his reference to game theory.
By stripping moral calculus to the bare essential of foresight and
intentionality, we can develop tools for crafting values that ought to
transcend the accidents of evolution. Only time will tell whether the kinds
of rules we seem to have distilled from "toy universes" in simple models will
be robust in the face of radically augmented intentional actors. Let's hope
so.
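To make the "toy universe" reference a bit more concrete, here's a minimal
sketch - my own illustration, not anything Wilson offers - of the kind of
simple model I have in mind: an iterated Prisoner's Dilemma in the spirit of
Axelrod's tournaments, where a rule like reciprocity ("tit for tat") can be
seen to hold up, at least within the toy world's own assumptions.

    # A toy universe: iterated Prisoner's Dilemma with the standard payoffs.
    PAYOFFS = {            # (my_move, their_move) -> my payoff
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def tit_for_tat(history):
        """Cooperate first, then mirror the opponent's previous move."""
        return "C" if not history else history[-1][1]

    def always_defect(history):
        return "D"

    def play(strategy_a, strategy_b, rounds=200):
        """Total payoffs for two strategies over repeated play."""
        history_a, history_b = [], []
        score_a = score_b = 0
        for _ in range(rounds):
            move_a, move_b = strategy_a(history_a), strategy_b(history_b)
            score_a += PAYOFFS[(move_a, move_b)]
            score_b += PAYOFFS[(move_b, move_a)]
            history_a.append((move_a, move_b))
            history_b.append((move_b, move_a))
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (600, 600)
    print(play(tit_for_tat, always_defect))  # exploitation stays limited: (199, 204)

Whether the rules that win in a world this small carry over to radically
augmented intentional actors is, of course, exactly the open question.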
But to return to my perhaps incautious statement about "science" not
providing a basis for moral choice, I certainly didn't mean that we can't or
shouldn't be RATIONAL about moral philosophy. However, in our rightfully
high valuation of the scientific method, I think it's important that we not
fall prey to naive "scientism". The scientific method is a subset of
rational thought (and there definitely seem to me to be levels of
applicability of the classic scientific method of hypothesis, experiment,
repeatability and falsifiability), but reason is not only those things. It's
not as if our approach to reality has to be "full-blown science or nothing at
all."
Even within the natural sciences, we see a spectrum of applicability of these
tools, with physics at one end and, say, evolutionary biology at the other.
While it is certainly possible to do "experiments" by predicting that, say,
fossils of a particular type ought to be found in geological formations of a
particular kind (and then looking to see if they are indeed found there), the
number of ecological variables impacting living systems makes
narrowly-focused "experiments" difficult.
Human behavior in society presents even more difficult problems in applying
the classic formulation of the scientific method. The number of variables is
much higher than in any physics experiment and carefully-controlled
experimental activity is circumscribed by ethical limitations on just what
can be done to people. And consideration of moral issues in any really
meaningful way is always a question of humans in society. Thus "science" per
se - at least with the tools currently available - provides only an
incomplete guide to moral philosophy.
With these comments in mind, I'll address some of the posts in this and
related threads:
In a message dated 99-11-14 13:14:46 EST, CurtAdams@aol.com wrote:
> I haven't read Sokol directly, apart from what is published here. I didn't
> perceive him claiming that humanities were bad, just that fuzzy thinking
> and ignoring facts hurts. You *could* do scientific and scholarly work in
> "cultural studies"; I thought the point was that the editors of that
> magazine were not.
I think that's a correct statement of a healthy reaction to Sokal's
"experiment". Close readers will know that I never miss a chance to take a
shot at postmodernist hogwash. But my criticisms of postmodernism shouldn't
be mistaken for a belief that pursuit of the traditional humanities isn't
worthwhile. Using at least some of the traditional methodologies of the
humanities - especially history - tempered by a deep respect for the
scientific method as the most certain guide to knowledge where it can be
applied, is still a deeply important part of participating in the
development of culture.
In a message dated 99-11-14 17:34:05 EST, neosapient@transtopia.org (D.den
Otter) wrote:
> What's up this ethics debate anyway -- I thought
> that rational ethics based on enlightened self-interest were
> the default in transhumanism?
That's probably a correct statement at a high level of generality, but it
doesn't really address the ultimate questions of moral philosophy. I think
the essay by Greg Johnson that Daniel Ust posted in this thread on 99-11-17
06:42:11 EST sets out very well how pointing to "self-interest" alone doesn't
provide real answers to ethical questions. This is hinted at by the very
term you use, "ENLIGHTENED self-interest": A lot of complexity and many hard
questions are subsumed under the term "enlightened".
Two posts illustrate what I consider to be an excessive "scientism" regarding
social and philosophical matters:
In a message dated 99-11-15 14:52:18 EST, jr@shasta.com (J. R. Molloy) wrote:
> From: Anders Sandberg <asa@nada.kth.se>
> > ...if you like blue
> >and I like green, which color do we paint the building in? We have to
> >do politics to resolve that dispute, be it a compromise or something
> >else.
>
> Leave it to the decision making authority/architect.
> Non-politicians decide what colors to paint buildings everyday.
I understand and agree with Anders' point, although perhaps the example given
isn't a good one - after all, the market can sort out architectural
questions. (There's an example in my neighborhood. An office building was
constructed about ten years ago that had a hideous green, textured concrete
surface. I remember thinking when it was being finished, "What an eyesore!
Who would lease space there?" Well, apparently few people did, because
eventually the building was completely stripped of the offensive coating and
re-sheathed with an attractive and creative mix of different glass and steel
textures. It now appears to be fully leased-up.)
In a message dated 99-11-16 15:07:09 EST, jr@shasta.com (J. R. Molloy) wrote:
> A scientific approach to making decisions together with other people,
> acting in the public sphere would, I imagine, eliminate biases which
> interfere with obtaining the most successful decisions. Cutting politics
> ("A strife of interests masquerading as a contest of principles. The
> conduct of public affairs for private advantage." --Bierce) out of the
> loop, would meet with the strongest resistance from powerful politicians
> (whom the scientific method would also prune from the decision making
> system).
>
> In this scenario, neither you nor I would decide which color to paint the
> building. We'd both agree to let the expert system decide.
> Eventually, we'd let the SI make all these kinds of decisions. (Didn't we
> have this conversation once before?)
This smacks of the kind of naive "scientific" approach to society that one
finds in Marx and his followers. Substitute "the vanguard party" for "SI",
and you have the kind of elitist, "we know what's best for society" mentality
that leads inevitably to Bolshevism. I honestly can't imagine what process
you're picturing this SI would engage in to make a "scientific" decision.
Some kind of balancing of everyone's utility functions based on perfect
knowledge of their internal brain states? This sounds like one is merely
substituting "SI" for "god" and "scientific decision making" for "paradise".
Here I note a personal reaction to comments of this kind, similar to the
reactions I've seen from other experts when people address the fundamentals
of their field from what appears to be a lack of basic background in the
area. With real respect to you, J.R., I'm thinking of the frustration I've
seen expressed by, say, Robin when people talk about economics, or by
Natasha when people talk about art without having looked into the basics of
the subject.
I don't by any means wish to discourage "amateur" discussion (in fact, it's
one of the things I value most about this forum), but history and politics
are DEEP subjects. Yes, there's a lot of hogwash that gets propagated in
these fields (just as there is in economics and art), but it's not all
hogwash, by any means. For good or bad, rigorous science ISN'T yet possible
in the study of human history, politics and law (and may never be), but that
doesn't mean that one can't find some truth, if only a relative and
contingent truth. Navigating these realms is fraught with the danger of
falling into the traps of total subjectivism on the one hand and simplistic
scientism on the other. As it happens, history teaches that both of these
pitfalls lead to the same result: Totalitarianism.
In a message dated 99-11-14 12:57:42 EST, jr@shasta.com (J. R. Molloy) wrote:
> I suspect that deep down, everyone really agrees on "what we
> ought to do." We just disagree (sometimes) about how to do it.
This is an example of what I'm talking about. It's probably true that deep
down, all humans share a few basic needs and desires. But a study of history
will show that the devil is definitely in the details. The disagreements
about "how to do it" have given rise to a diversity of cultural phenomena as
rich as anything one finds in biology or ecology. History also teaches that
people seeking simple solutions to questions of "the good" usually spill a
lot of blood.
I think Damien has expressed well the complexities involved in considering
ethical and social questions:
In a message dated 99-11-18 23:02:13 EST, d.broderick@english.unimelb.edu.au
(Damien Broderick) wrote:
> To act prudently implies several subsidiary factors: that you have an
> adequate model of your own current and long-term needs, desires, aversions,
> etc; that you (can and do) know with some accuracy - ie, again, you can
> model accurately - how the brute or unintentional world works; that you can
> fairly effectively model the complex interplay of the brute world and other
> intentional critters like yourself.
>
> Failures in any of these subordinate competencies will tend to compromise
> the effectiveness of how well you understand the likely impact of your
> actions. And chaos limitations mean that even the best stocked mind and
> heart is going to make errors in modelling anyway. But we adjust a tad and
> start again.
>
> None of this denies us the opportunity to reassess our goals; in fact, it
> almost ensures that we must, from time to time, as more accurate
> information about the world and ourselves is gathered, and as our
> theoretical models are improved. I don't see any real gulf between Is and
> Ought in that sense, except that we need to keep ourselves informed on what
> actually *Is*, so as to optimise our chosen *Oughts*.
Or, to put it in terms of the question of "scientific" knowledge of social
phenomena, people in society are like weather, only more so. The fact that
we have far from perfect models and that chaotic processes undermine the
possibility of perfect knowledge doesn't mean that we can't have a science of
meteorology; it just means that we have to understand the limits of
prediction. Transhuman technologies will, I suspect, only make the matter
more complex and less predictable because, as I discuss above in connection
with Wilson's "naturalism", we're looking to make the regime in which moral
actors can exercise intentionality WIDER, not narrower.
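A tiny sketch of the meteorology point - again my own illustration, not a
model of anything social - is the logistic map, a standard toy example of
chaos: the dynamics are perfectly deterministic, yet a "measurement error"
of one part in a million eventually swamps the forecast, which limits
prediction without making the science worthless.

    # Two runs of the logistic map x -> r*x*(1-x), a standard toy model of
    # chaos, started from nearly identical initial conditions.
    def logistic(x, r=4.0):
        return r * x * (1.0 - x)

    def trajectory(x0, steps=50, r=4.0):
        xs = [x0]
        for _ in range(steps):
            xs.append(logistic(xs[-1], r))
        return xs

    a = trajectory(0.400000)
    b = trajectory(0.400001)   # initial difference of one part in a million
    for step in (0, 10, 25, 50):
        gap = abs(a[step] - b[step])
        print(step, round(a[step], 4), round(b[step], 4), round(gap, 4))

Short-range forecasts stay useful; long-range point predictions do not -
which is roughly the epistemic position I think we're in with people in
society, only more so.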
Greg Burch <GBurch1@aol.com>----<gburch@lockeliddell.com>
Attorney ::: Vice President, Extropy Institute ::: Wilderness Guide
http://users.aol.com/gburch1 -or- http://members.aol.com/gburch1
"Civilization is protest against nature;
progress requires us to take control of evolution."
Thomas Huxley