Re: Censorship

From: Hal Finney (hal@finney.org)
Date: Mon May 27 2002 - 23:19:31 MDT


Lee writes:
> We all exhibit various levels of disapproval when reading
> posts. The question is, what is the appropriate way to
> express our disapproval, or feelings, or opinions, and
> what terms should be used?

This is a good question. We have seen many problems arising from people
expressing disagreement in a non-constructive way.

But let us not forget the other side of the coin. We all feel various
levels of approval and agreement as well when reading posts, and it's not
always obvious how best to express that, either! In fact, I would suggest
that people seem to find it easier to express disapproval than agreement
in online forums; the opposite seems to be true face to face.

> In the 1970s and '80s, several times a Vice President of
> the United States strongly condemned the opinions coming
> from some quarters. Because of his position, and because
> of the language he used, cries of "censorship!" could be
> heard. It's similar in many ways, though not all, to
> recent Extropians cries of "censorship!".

Asking what is or is not censorship is a semantic question, and there is
no unique answer. What we really want to know is what policies regarding
controversial issues will best serve our needs. It's a hard question;
at one extreme we have dogmatism and rigidity, but at the other we have
chaos and absence of focus. Clearly we need some middle ground. It is
a topic very worthy of discussion.

> So, in conclusion, I would ask those crying "censorship!" to
> rephrase their expressions of dismay. I would also appeal to
> those protesting a certain discussion topic to attempt to provide
> reasons why a topic should not be discussed, beyond "it makes
> me sick", or "it's counter to Extropian principles to discuss
> that".

I agree, and I'd break it down like this. There seem to be two
issues here. The first is the consensus we have about what topics are
appropriate for discussion on this list. This is, after all, the Extropians
list, and for that title to have meaning, the name must impose some
limits on the range of topics. There are other lists for other topics.

The second is the need to provide at least a threshold argument
that a topic is off-topic or otherwise inappropriate by whatever
guidelines for topicality we follow. Rather than just saying that
something is inconsistent with the Extropian principles, it should be
necessary to have at least a sentence or two explaining it.

Beyond topicality, it's true that we are all human beings and some topics
will be distasteful or disgusting. While this may put practical limits
on the range of topics we can effectively discuss, I don't think it is
something that we want to elevate to a strong moral principle.

I'm not sure everyone understood Wei Dai's reference to the "Wisdom
of Repugnance". This is a quote from Leon Kass, currently head of the
U.S. President's Council on Bioethics and perhaps the most prominent
opponent of Extropian concepts. He has a whole article on "The Wisdom
of Repugnance" at
http://www.princeton.edu/~wws320/Second%20Pages/06Reprotech/Cloning/Wisdom%20of%20repugnance.htm.
It's a pretty appalling article:

: "Offensive." "Grotesque." "Revolting." "Repugnant." "Repulsive." These
: are the words most commonly heard regarding the prospect of human
: cloning. Such reactions come both from the man or woman in the street and
: from the intellectuals, from believers and atheists, from humanists and
: scientists. Even Dolly's creator has said he "would find it offensive"
: to clone a human being.
: ... In crucial cases,... repugnance is the emotional expression of
: deep wisdom, beyond reason's power fully to articulate it.

I think we should be able to agree that it would be a mistake to adopt
the wisdom of repugnance as a moral principle for guiding our discussions.
It would be validating some of the strongest anti-Extropian arguments.

At the same time, I understand and recognize the difficulty many people
have when dealing with emotionally loaded issues. It seems to me that
a reasonable compromise is to accept that there are limits to what we
can discuss, but not to glory in it. We are, after all, limited in many
other ways: in intelligence, in memory (look at all the misattributions
and errors we have seen recently), indeed in wisdom. All of these are
limitations that we hope to transcend, and in the same way, we might
hope that in the future we will have mastery of our thoughts to the point
that we can think calmly even about issues which make us upset today.

Pending that Extropian future, it seems to me that there is a very
simple and practical way that we can discuss many of these controversial
issues without triggering the emotional reactions that people have found
so upsetting. That is to recast the problem in a more abstract form.
We are going to be dealing in future decades with artificial life forms
about which we have few emotional instincts. Issues that carry heavy
emotional baggage with regard to human beings can be discussed much more
easily with regard to artificial life forms.

This is particularly relevant for some of the more extreme hypotheticals
we have been considering, such as the treatment of children. Clearly no
Western society is going to permit, within the next few decades, some
of the policies which have been proposed. However, it is entirely
possible that in that time frame we may create artificial intelligences,
uplifted animals, or other synthetic life forms which have human levels
of intelligence.

What policies should we follow for the management of immature AIs,
when they are in a pre-sentient state? Could they be owned? Could a
pre-sentient AI legitimately be destroyed if it does not appear to be
developing properly? Expressing questions in this form is going to cause
much less of an emotional reaction than asking about human children.
And really it is a more practical question to discuss, because the
policies regarding these future issues are not yet established, while
policies regarding human children have centuries of precedent.

Hal



This archive was generated by hypermail 2.1.5 : Sat Nov 02 2002 - 09:14:26 MST