Mind abuse & technology development [was RE: New Government?]

From: Robert J. Bradbury (bradbury@www.aeiveos.com)
Date: Fri Aug 27 1999 - 05:20:55 MDT


On Thu, 26 Aug 1999 hal@finney.org wrote:

> There will be new minds created, conscious and full of potential, but
> some of them will have different opportunities and training than others.

I think we have a problem with the term "mind" here. We have the
concept of a conscious entity. We have the concept of creating
an "unformed" conscious entity and letting it evolve (children).
We have the concept of uploading oneself onto new hardware.
We have the concept of creating backup copies to be reactivated
in the case of an accident.

Q1: Do you "own" the backup copies?
    (After all, you paid for the process (or did it yourself) and
     it is presumably on hardware that is your property.)
Q2: Do you have a right to "edit" the backup copies?
    In some forms, this would be considered "brainwashing", which
    seems morally repugnant.
Q3: If you "edit" the backup copies when they are "inactive"
    (so they feel no pain) and activate them are they new individuals
    with their own free will or are they your "property" (i.e. slaves)?
Q4: Suppose you are the "overlord" (an SI evolved from hal with some
    beta test code leased from the Anders & Robert SIs), and you have
    a subprocess running (say, an "original" copy of hal, operating
    in a virtual reality to see what would have happened if you *had*
    tried to kiss the girl on your first date). Say hal[0] kisses the
    girl, goes on to marry her, gets browbeaten into living a
    "normal" life, never discovers the extropian movement, etc.
    The simulation becomes uninformative to the overlord, so
    it is terminated and all copies of hal[0] are erased.
    Have you done something morally wrong?
Q5: How does one judge or evaluate moral systems?
    The female praying mantis eats the male to provide protein
    for her offspring -- is this "natural" or "wrong"?
    A human killing a human is wrong in the context of our current
    reality, but is perfectly reasonable for an overlord who has
    lots of them. Perhaps it is morally incorrect (and should be
    punished) for SIs to throw black holes at other SIs. Or perhaps
    moral systems are totally irrelevant, since in the long run the
    universe runs down anyway...

>
> Ethical problems arise similar to issues of child abuse, or of children
> who are not given the same advantages as others. If your neighbor on the
> next asteroid is creating sentient subminds and not letting them grow and
> develop, you might be really unhappy about that. You and your buddies
> might even be able to do something about it. But first you need some
> ethical guidelines for which kinds of minds are proper and which are not.

I've been thinking about this problem -- my moral sensibilities are
bumping into my freedom-seeking libertarian leanings. It gets worse
because you clearly would argue that someone molesting a child or
raping another conscious entity (i.e. violating their free will)
is doing something "wrong". But if you are doing these things in
"virtual reality", nobody is really getting hurt, so it isn't wrong.
But once your "virtual reality" expands to include "conscious minds",
then it is wrong. But a "conscious mind" is to an SI what a
thought or a dream is to one of us. So if it is wrong for an SI
to create scenarios in which conscious minds are molested/raped/killed,
then it is wrong for us to "imagine" doing those things as well.
But why would it be wrong to "imagine" something if it isn't real?
Why is it wrong for an SI to do anything (since it is running nothing
but a big computer simulation)? The reality is the processors
orbiting on platforms around a star (or some other architecture).

It gets worse, because if you argue that it is not wrong to
"imagine" molesting/raping/killing, and/or for an SI to freely
manipulate "conscious minds", then you are on a slippery slope.
You end up in a place where "morality" depends entirely
upon (a) the relative "seniority" (?) of the actor as compared
with the "person" being acted upon; or (b) the physical
reality of the actions.

It seems to me it's a really big mess.

>
> Hopefully, by the time this is a serious problem in 2020 or 2030,
> technology will be helping out. Even without a full nanotech singularity
> we can expect improved medicine so that people can have longer and
> healthier working lives, and technologies to amplify productivity so that
> a smaller work force can provide enough goods for an aging population.

I think it arrives by ~2010 or earlier. Scientific American (Sept. '99)
has an interesting piece, "Enter Robots Slowly" (pp. 36-37), discussing
gyro-balanced wheelchairs that can traverse uneven terrain and climb
stairs with an occupant (using only 3 Pentium-class processors).

Things are going faster than even optimists such as myself predict.
Compaq has announced 8-way SMP Xeons (in line with the predictions
I've been keeping for the semiconductor industry for the last few
years). But I've been stunned to the point of falling out of
my chair by IBM's announcement of 64-way SMP (next year) and 500 MHz buses.

SMP = symmetric multi-processing.

My gut instinct tells me that the biotech industry will
provide many similar surprises over the next 10 years.

Anyone who thinks nanoassembly will not arrive until 2030 is a Luddite!

Robert


