When Programs Benefit

From: Lee Corbin (lcorbin@tsoft.com)
Date: Tue May 28 2002 - 07:54:19 MDT


In "Censorship" Hal Finney writes

> Pending that Extropian future, it seems to me that there is a very
> simple and practical way that we can discuss many of these controversial
> issues without triggering the emotional reactions that people have found
> so upsetting. That is to recast the problem in a more abstract form.

Yes, exactly! And it is this more general outlook that already drives,
or at least strongly informs, the views many of us hold on current issues.

> We are going to be dealing in future decades with artificial life forms
> about which we have few emotional instincts.

Quite right, but just as our emotions are an important part of
our thought processes today, they will, one hopes, remain so then too.

> Issues that carry heavy emotional baggage with regard to human beings
> can be discussed much more easily with regard to artificial life forms.

It is most important to strive for consistency. We all have
parts of our judgments and tastes that we are unwilling to
abandon, and these strongly affect our preferences on more
abstract questions. But just as you say, it also works the other
way: whatever we conclude we want to believe about artificial life
forms must inevitably apply to present-day issues as well, if we
want to be consistent.

For example, my view that the death of a child at age 12 is not as
bad as the death of a child at age 7, that that in turn is not as
bad as being aborted, and that being aborted is still better than
never being conceived at all, comes directly from the more abstract
consideration of the general desirability of getting run time.

At some point in the relatively near future, a program will benefit.
What do you want to do with the computer resources under your control
when the running of certain programs becomes a moral issue? Is it
ethical to stop such an execution once it's started? What I definitely
do *not* want to happen is for people to never run programs that benefit,
simply to avoid the issue of halting them!

It would perhaps even be wise for each of us to state publicly that
he or she grants, to anyone who starts an execution of us, permission
to halt it, just so that entities are not inhibited from running us
in the first place.

But what applies to ourselves should, for the sake of consistency,
apply to others, and vice-versa. For example, the cryonicist's
version of I. J. Good's Meta Golden Rule says "we revive or grant
run time to former persons in part so that future persons will
revive or grant run time to us."

People are algorithms in "person-space", I say; that is, if we consider
the entire space of algorithms, and within it the subspace of those
capable of experiencing benefit, then people are a yet further subset
of that. The thread of a person's life weaves around in person-space,
and whom "you" identify with is a large fuzzy sphere centered about
your present location.
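
To put that nesting in rough notation (just a loose sketch of my own,
nothing rigorous, and all the symbols are mine):

    Let $A$ be the space of all algorithms, $B \subset A$ the
    subspace of those capable of experiencing benefit, and
    $P \subset B$ the persons. A life is then a trajectory
    $x(t) \in P$, and "you" are roughly the fuzzy ball
    $\{\, y \in P : d(y, x(\text{now})) < r \,\}$
    about your present location, for some generous radius $r$ and
    whatever distance $d$ person-space turns out to have.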

I raise these abstract conceptualizations so that we don't make a
mistake: we should keep our notions as general as possible when
discussing these difficult issues (thanks, Hal), but it is also
essential, at the same time, to prepare for post-Singularity
ethical issues. Thank goodness, a number of people are already
doing that.

The core issue is whether *freedom*, which has worked so marvelously
until now in human history, will be ascendant in the future, or
whether there will be a single morality imposed from above. In other
words, will I be free to run the algorithms I choose on my resources,
and will others be free to run me? Unless some nightmare eventuates,
freedom may actually turn out to be the only computationally feasible
choice. It really is, even now.

Lee


