From: Mike Lorrey (mlorrey@datamann.com)
Date: Sun May 26 2002 - 11:30:16 MDT
Harvey Newstrom wrote:
>
> On Friday, May 24, 2002, at 06:23 pm, Mike Lorrey wrote:
>
> > I have never discussed anything else. It is you who have twisted my
> > statements, made unsupported accusations, and claimed that I am a
> > proponent of something I am not. You already apologized once for it, but
> > here you are, doing it all over again.
>
> Mike, I am sure that I have misunderstood many of your points. Your
> viewpoints are so alien and different from my own that I have no doubt
> I do not understand them accurately. If I misstate your viewpoints,
> I assure you that it is my own misunderstanding of your views. I would
> never deliberately misrepresent your views to make you look bad. I will
> try to be more careful when interpreting your views.
Thank you. My posts were meant to ask you, and others, to consider which
is more moral and better for extropy. The moral arguments I've seen you
and others make on the list in the past seem to deny any sort of Natural
Law basis for morality, relying instead, it seems, on a more utilitarian
argument about what does the least harm. The rhetorical question I've
posed is this: by your apparent 'least harm' standard, which is more
moral: killing someone who is causing, or intends to cause, millions or
billions of deaths, or passively allowing millions or billions of people
to be killed while taking only whatever measures you see fit to save
your own person?
If you think taking the interventionist stance is wrong even when it is
in actuality the proper 'least harm' choice, why do you think so? Is it
so important to take no action simply because a person has not yet
committed an overt, direct act of violence? Why? At what point do you
think we will pass a threshold where an interventionist stance is
justified, and does that threshold come before or after the point where
it is too late to really do anything about it?
We are, of course, dealing with a scheduling problem. Resource planners
looking at world resources and human population see a consumption peak
around 2050 or thereabouts. I see this peak as the last possible date by
which we can expect a technological singularity with any possible
positive outcome, and the closer to that date the Singularity occurs,
the less extropic I expect it to be (i.e. more and more like the Borg or
the Matrix). We think the Singularity will occur sometime between 2020
and 2030 at current rates of technological change and acceleration. This
means that we can theoretically afford a legalistic interregnum of
somewhere between ten and thirty years in which Luddites could control
the political landscape and restrict technological development by
statutory means, thus delaying the Singularity. If the interregnum lasts
any longer, the result will be a future in which the worst prophecies of
the Luddites are fulfilled, or else mankind will be forced by lack of
resources to devolve and depopulate to pre-Industrial Age levels.
Ten to thirty years is a decently broad window (assuming our predictions
are accurate) in which to start with the least damaging and least overt
strategies, resorting to worst-case tactics only as such passive efforts
prove ineffective or untenable in ending the Luddite Interregnum.
The problem, of course, is that Luddite tactics seem to be designed to
have self-fulfilling results, and Luddites are quick to blame
technophiles for failures or to deny outright that a tactic has failed.
It took seventy years to end the Communist Interregnum in Russia, and
that was with more than half the world supporting anti-communist forces.
If the Luddite movement gains control of all of the most industrialized
polities, there will be no real sanctuary for technophilia in the world.
That would spell the end of modern civilization and no Singularity at
all, merely the creation of a new Atlantis legend in a newly
pastoralized world.