From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Thu Mar 15 2001 - 21:10:55 MST
> I discussed whether the distinction between the valuation of "beings" in
> the "virtual world" vs. the "real world" was a binary state (yes/no)
> or gray scale.
In reply, on Thu, 15 Mar 2001, hal@finney.org wrote:
>
> I think what Nick means is that it is irrelevant whether you are
> dealing with a "simulated" being or a real one. He is not saying that
> consciousness is the deciding factor, merely that simulation-vs-real
> doesn't matter. If you don't care about a simulation with a certain
> level of consciousness, then you shouldn't care about a real being with
> that level of consciousness.
>
> The question of what level of consciousness deserves consideration
> is independent of this.
If that is what he is saying, then I would disagree completely
and with Intestinal Fortitude.
(As an aside, I'll simply note that it will be too bad over the
next decade that agents like those creating the Extropian archives
are not a "little" more intelligent about the links -- so much of
the humor in these discussions is going to be lost for the lack of
even rudimentary intelligence regarding interposting links... Sigh.
Sasha would have offered a strategy for fixing this. Or at least I
can imagine that he would have.)
There is a critical distinction between a basement level "real"
self-conscious entity and one that is running on a simulation of
some sort. In the first case, you can make a reasonable claim
that you are a "free agent", in the second case, that assertion
is very dubious. In the first case you are a pseudo-random
manifestation of the computronium of the real "universe", in
the second case you are a manufactured being functioning as
a lab rat in an experiment (most probably). For Nick's "simulation"
vs. "real" "morality" (?) to hold as equivalent, you have to damn as
immoral many scientific experiments now involving the genetic or
environmental manipulation of mice or rats. No, they are not "self-aware",
but one cannot simply throw out our perceived degree of self-awareness or
self-consciousness. For example, Data, unlike humans, can run a
"self-diagnostic". He (hopefully) can "discover" if his mental
state has been corrupted by radiation damage or viruses and allow
that to factor into his assessment of the reliability of his positions
or conclusions. Can "self-conscious" humans do this? Given the
ability to be aware of, and to run, self-diagnostics, one would
presumably conclude that such individuals have a higher level of
self-"consciousness" than humans can currently lay claim to.
However, I think few humans alive today would have reservations
about reducing Data back to his constituent atoms (he is an android,
after all) compared with reducing a flesh-and-blood "human" back into
cosmic dust. Human valuations on perceived self-consciousness
are highly context specific.
To be horribly graphic about the distinction, we, in our current
world seem to allow a critical distinction between the situation
in which I "really" abduct a woman, brutally abuse her and
then finally kill her and dispose of her body in some remote
location and the situation in which I simply "imagine" doing that.
(For the women on the list, we can translate this "doing" vs.
"thinking about it" analogy to cutting off self-perceived 'valuable'
body parts of males who have 'dissed' you at one time or another,
frying them up on the stove and feeding them to your dog or cat).
[I may be sexist, but I try to be an equal opportunity sexist... :-)]
The critical distinction in our society seems to be: *whatever*
you "think" is OK, but we will regulate what you "do". In
the SI reality, what you "think" becomes manifest in a sub-SI
reality, i.e. it becomes "doing". But the doing is completely
artificial and divorced from the reality in which you operate
(as my "thoughts" are from what I do when I go out and take a
walk in the "real world").
Now, Eliezer has proposed that the SysOp doesn't even let you
"imagine" executing crimes (at least in my interpretation
of what he proposes); perhaps it is less than that -- you can
design all the world-destroying nanobots you want, it simply will
never manufacture them. If his vision is closer to the former
than the latter, then that is an operating system in which the
thought police prevent you from ever even "going there".
However, the implementation of the "thought police" is a
swamp that may lead to stagnation and self-mummification.
It also permanently closes off approaches to the exploration
of the phase space of what is "possibly feasible". Lots
of self-conscious humans had to die horrible deaths to get
us to where we are today. Make a case that that should
cease if we are still to get where we might go in the future.
Robert
This archive was generated by hypermail 2.1.5 : Sat Nov 02 2002 - 08:06:24 MST