Is Extropian-ism doomed? was Re: Reliberion

From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Sat Mar 17 2001 - 02:07:38 MST


T.O. Morrow offered an interesting philosophy (quasi-religion) to the
discussion as a framework for considering how to run simulations, what
you can do in them, and how to "hold" the simulation directors and
producers.

I'm not going to go into that because I'm not sure I understand it
well enough.

However, his discussions with Samantha, placed side by side with
Eliezer's Sysop discussions, do raise some interesting issues.

If we assume that extropian principles are those striving for
increasing complexity and sophistication (this may be vulnerable
to attack, but let's assume it for a minute), and *if* we assume
that conscious entities are inherently valuable and you cannot
infringe on their right to self-determine whether or not they should
continue to exist, do we not have an inherent conflict?

I.e. if you exhaust all the possibilities (read: use up all the matter
to store the memory states of all the conscious entities that have
ever been created and who cannot be destroyed), does not the moral position
that you "cannot" erase those states imply that you cannot fulfill the
extropian prime directive?
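
To make the resource argument concrete, here is a toy sketch (purely
illustrative, not from the original argument; the bit budget and the
per-entity storage cost are made-up numbers) showing how a finite
matter/bit budget plus un-erasable entity states eventually stalls
further exploration:

# Toy illustration: finite storage plus un-erasable conscious-entity
# states eventually halts exploration of new states.

TOTAL_BITS = 10**6          # made-up stand-in for "all the matter"
BITS_PER_ENTITY = 10**3     # made-up cost of preserving one entity's memory state

preserved_entities = 0
free_bits = TOTAL_BITS
steps = 0

while free_bits >= BITS_PER_ENTITY:
    # Each exploration step creates a new conscious entity whose state,
    # under the "cannot erase" moral position, must be kept forever.
    preserved_entities += 1
    free_bits -= BITS_PER_ENTITY
    steps += 1

print(f"Exploration stalls after {steps} steps; "
      f"{preserved_entities} entities preserved, {free_bits} bits free.")
# Without erasing (reusing) some of those bits, no further phase space
# can be explored -- which is the conflict being pointed at.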

The fundamental path through which evolution operates is to wipe the slate
sufficiently clean from time to time to allow it to go off in a completely
different direction. This is not allowed in a "moral" SysOp world interested
in protecting everyone's "self"-interests. (I doubt the SysOp has a
self-destruct sequence tied to a random-interval timer.)

Now, from an extropian perspective, you desire that the phase space be
explored as completely as is feasible. Accepting that, you realize that
sooner or later your sim will have had its run and your bits are up for
reuse. From a truly extropic perspective this is fine with you.

Put another way, I think that Eliezer and Greg have a problem adhering
to extropic principles unless there is some very magical hand-waving
done to explain why "extropians" or "SysOps" would choose to limit the
phase space that can feasibly be explored.

In short -- what do you do in the simulation where you get to the point
where you cannot explore any further without 'offing' someone against
their will?

[Now there may be parts of the Extropian Principles that deal with this
(I haven't checked them), but it would seem to me that the consequence
of this is that what we are really discussing is "Limited Extropianism"
and not a consciously driven (read: probably more efficient) full-blown
extropianism where you fully explore the phase space.]

Note, there may be a form of "moral" extropianism where you explore
the phase space as fully as possible, but you do it with as little
"pain & suffering" as possible. This gets extraordinarily tricky,
as for example in the case where one wants to determine whether
pain & suffering can drive people to self-enlightenment such that
they realize pain is something that "they" have and can choose
to experience in a variety of ways.

Robert
