From: John Marlow (johnmarlow@gmx.net)
Date: Sat Apr 28 2001 - 12:42:42 MDT
#Hi.
On 28 Apr 2001, at 8:10, GBurch1@aol.com wrote:
>
> >
> > #I'll give you a big part of it right here: They're opposed to the
> > artificial creation of life for the express purpose of profit or
> > experimentation/torture or spare parts. As am I--for which I've
> > recently been labeled a neoludd, moron, technophobe, etc. etc.
> > (really quite an entertaining accusation, but I digress...). Were I
> > these things, I'd support this bill, which I do not.
>
> Expressed as you have here, I'd say this position could fairly be
> characterized as "luddish". You say that the "artificial creation of life
> for the purpose of profit or experimentation … or spare parts" is wrong (I
> leave "torture" for later comment). Without a clearer definition of the
> terms "artificial" and "life", your position seems indistinguishable from
> that giving rise to the proposed law under discussion.
#Okay; clearly there are reasonable positions somewhere in between. As a
starting point, let's say conscious, self-aware life. In fact, if
you're talking public acceptance, you might be able to confine the
definition to the warm-and-fuzzies. I'm not claiming that's the
morally correct line to draw.
> Is culturing bacteria
> "artificial"? It is certainly "life". What about a simple human tissue
> culture?
#Given the clearer definition you asked for, tissue culture is fine.
Bacteria are fine.
> Is doing so for the purpose of developing a new drug, which one
> then sells for economic gain, doing this for "profit" in a way you would
> oppose?
#Not at all. You start torturing monkeys, I have a problem with that.
Growing entire humans for parts, more of a problem. Cloning a human
for parts and coding the clone to be anencephalic--THERE's a dilemma!
This would *seem* to avoid the self-aware, conscious definition--but
does it really? I don't know.
> How about a complete human organ?
#No prob. Grow it in a pig, though, and kill the thing to get it out--
and we have a problem. Can you see the consistency of the view
expressed? It's a middle ground that's far closer to the views
expressed here than to the neoludds. I believe most people, if they
thought about the matter, would find they occupy the middle ground. A
straight "our view and everyone else is a neoludd technophobe moron"
stance guarantees failure and ridicule by the masses as well as
intelligent critics.
>
> As for "torture" - is an experiment in which cultured pathogenic bacteria are
> exposed to antibiotics "torture"?
#Doesn't seem so. You could also classify this as self-defense; you
did say pathogenic. No big deal.
> If one ascribes moral subjectivity to the
> bacteria, then it would certainly seem so.
##Samesame.
> What if the bacteria are
> transgenic? Sounds "artificial" to me.
##Still bacteria, but the path may get murky as things go on.
>
> Obviously there are degrees and spectra of "life" in a moral sense.
#Absent communing with bacteria to learn their thoughts, if any, it
would seem so!
>
> > > > This sort of thing serves as a reminder that what we take as obvious,
> > the
> > > > majority of people think is ~really far out there~.
> >
> > #Glad to see someone realizes this.
>
> As someone who works in the "mainstream" world of law and governance, I'm
> well aware of how "really far out there" some of our ideas are. So were
> Baruch Spinoza's and, for that matter, Thomas Jefferson's (at least to many
> people) in their own times.
#Sure. Point is, this COULD be the future--but isn't necessarily. At
the very least, public perceptions will influence timing. The greater
the opposition, the greater the delay in implementation, and the more
who die before that (hopefully) becomes unnecessary.
>
> > > It is also a sign that we need to make more inroads in mainstream
> > > thinking. It is not enough for people to have heard of nanotechnology or
> > > even know what it is, they better be able to integrate it with an
> > > ethical discourse.
> >
> > #Couldn't agree more--yet when morality/ethics issues are raised
> > here, they are ignored at best, dismissed or ridiculed at worst. 'Tis
> > troubling, to say the least. The term "techno-cheerleader" (coined
> > by..?) is not at all inappropriate.
>
> John, this is simply untrue. *I* coined the term "techno-cheerleader" (at
> least here) and for good reason. But not to characterize the many fine minds
> who have engaged in many extended discussions of morality and ethics here.
#Well, as I said, comment refers to the highly vocal minority. Point
'em out; I'd love to see 'em. (Seriously.)
> Rather, it was to make a caricature of how we COULD be misunderstood, if
> those discussions were ignored. It seems you're ignoring them. Just because
> every single message in every single thread doesn't contain a moral caution
> doesn't mean that most of us aren't well aware of the deep moral issues
> raised by transhumanism.
> In fact, extropianism is, in many respects, a
> statement of moral principles in response to the prospect of transhumanism.
#That's the impression I had initially.
>
> > #There is ***in general*** no sense of balance, of restraint, of
> > forethought, caution, or consideration of consequences.
>
> I just don't see how you can reach this conclusion. Consider the lengthy,
> recurrent discussion of social transparency and privacy which arises almost
> every time someone mentions a new development in the technologies of
> information gathering.
#I'm talking about life, not information processing.
> Consider the inevitable moral discussions which
> accompany consideration of uploading and cognitive transformation
> technologies. And the long-running discussion of "augmented" versus
> "synthetic" minds is nothing if not a protracted exercise in "forethought,
> caution and consideration of consequences."
#Okay, here's the biggie for me--and maybe it's been covered in the
dim and distant past: Loss of humanity. Impression given is, you (the
collective you) don't know what the next step (toward posthumanism)
is and you don't care; you just want to take it. Don't care what
you'll be, just want to get there. Don't consider that when you do,
you won't retain human values. That's reckless.
>
> > Pointing this
> > out draws ad hominem attacks--first refuge of those unable to support
> > their positions with reason.
> ...
> I'm not going to get drawn into commenting on the comments about the comments
> that have been made about meta-meta-meta observations, but I will simply note
> that your statements above seem to me to be unsupportable generalizations.
>
> > #It would be very easy to do a dismissive writeup on the movement,
> > presenting the views of the most hotheaded and vocal individuals as
> > being typical and representative (taking great care to note their
> > affiliations), and print it somewhere prominent. I would not do such
> > a thing--both because it would (I hope) be inaccurate and also
> > because I agree with many of the concepts espoused by transhumanism--
> > but others may well.
>
> Saying that you could hurt a group of people by picking out the least
> appealing members of the group and then depicting them as representative is
> true but, with respect, so what?
#I'm warning that someone will eventually do this--that some sort of
etiquette is going to have to be enforced on the public list or you're
going to run into trouble. Yeah, I know that flies in the face of the
freedom-to-post-anything stance, but there it is.
> You could do the same thing with any
> congregation of people, from the most "mainstream" political party to the
> group of people gathered at the coffee machine at the office. Sure, we're
> more vulnerable to such tactics, because our ideas are challenging to the
> great majority of people.
#More I think because most people are unfamiliar with you, haven't
thought about these things. If the first thing they hear is some
ranting portrayed as typical, that's going to form an initial
impression that's hard to overcome.
> I'm certainly aware of this and have on occasion
> spoken strongly against ideas I think have the potential for damaging
> transhumanism by confused association, especially racism and eugenics. But I
> wonder what you would propose as an antidote to the problem you perceive; a
> "speech code" in which it is mandated that every discussion of advanced
> technology must be accompanied by a "moral impact statement" lest we be
> mischaracterized as mad scientists?
>
> > #Transhumanists themselves do their cause far more damage than any
> > neoluddite opponent could hope to manage. This was the meaning of my
> > "not ready for prime time" comment. The general observations made
> > with that comment explain why. The Principles and the practice do not
> > coincide. As I said then, this is a pity.
>
> I won't ask you to "back this up" with examples - you could only do so with
> an exhaustive analysis of the list's archives. But I will ask that you
> suggest some solution to the problem you perceive.
#Create a rational and comprehensive morality/ethics policy, post it,
reference it, keep it up to date and STICK TO IT. Anyone
mischaracterizing your aims can be referred to it. Media will be
compelled to check it out because it will be easily located and
highly relevant. THEN if they portray some nutball as a typical TH or
extrope, THEY (the media) will look like the flakes because anyone
can see that they've ignored your stated policy. Which of course
you'll point out at every opportunity.
#How does that sound?
#One note of caution: You're going to have to enforce adherence to
it, or its purpose will be destroyed. Folks who don't agree with it
ain't on the official party roster.
jm
>
> Greg Burch <GBurch1@aol.com>----<gburch@lockeliddell.com>
> Attorney ::: Vice President, Extropy Institute ::: Wilderness Guide
> http://www.gregburch.net -or- http://members.aol.com/gburch1
> ICQ # 61112550
> "We never stop investigating. We are never satisfied that we know
> enough to get by. Every question we answer leads on to another
> question. This has become the greatest survival trick of our species."
> -- Desmond Morris
>
John Marlow
This archive was generated by hypermail 2.1.5 : Sat Nov 02 2002 - 08:07:19 MST