RE: When Programs Benefit

From: Lee Corbin (lcorbin@tsoft.com)
Date: Sun Jun 02 2002 - 10:08:59 MDT


Wei Dai writes

> I would argue that [even conscious] programs don't benefit directly
> from getting run time. They benefit from achieving goals that require
> run time. Thus if you halt a program before it finishes running or
> achieves its goals, it may not benefit at all.

Since people are programs too, are you saying that this is also true of
human beings? That is, that someone who dies before achieving his or
her goals hasn't benefited from living at all? Surely not.

> Suppose the program is conscious and has spent a lot of subjective
> effort trying to reach some goal, and you halt it just before it
> does. That seems pretty bad, worse than not running it at all.

This touches on an important component of values, conscious or
unconscious, that perhaps many people have. It's an extreme form
of the principle that "the ends justify the means". We will have
to return to this issue later, and perhaps in greater generality.

> I think a good ethical rule would be to always tell the program the
> computational resources it can access and ask whether it wants to be
> run given those limitations. That way you never have to halt a program
> involuntarily.

I agree with that ethical rule, but not for the same reason.
If I'm living in a simulation, why wouldn't someone wish
to inform me of the fact, as well as of how many resources
I have at my disposal? <Insert dig at the Federal government
here, since they can pass new laws and confiscate what I have
or raise taxes>

> > The core issue is whether *freedom*, which has worked so marvelously
> > until now in human history, will be ascendant in the future, or
> > whether there will be a single morality imposed from above. In other
> > words, will I be free to run the algorithms I choose on my resources,
> > and will others be free to run me? Unless some nightmare eventuates,
> > freedom may actually turn out to be the only computationally feasible
> > choice. It really is, even now.
>
> But you can't be sure whether you're living in a simulation or not.
> Perhaps whatever resources you think belong to you actually belong to the
> person who started this simulation, and therefore under your ethical
> system you're not free to run any algorithms without his permission. Are
> you sure that's what you want?

For sure, this is what I want. Yes: not only this instance of myself,
but all the resources apparently under my control, belong in reality
to whoever started the simulation. In effect, I have been given
resources to use as a child is given toys to play with. A bit
demeaning, perhaps, but vastly better than not running at all, and
fully proper, since the resources did and do really belong to the
person running me.

> By "freedom" you seem to mean people should be able to whatever they want
> with property they own. I wonder what is your position on how something
> that is initially not owned by anyone (e.g., a piece of the Moon) should
> become private property?

A piece of the Moon, which is non-living, should be taken over
by the nearest life capable of doing so. Later, once that life is
sentient and vastly more advanced algorithms show up to take over,
they should do their best to observe the Meta-Golden Rule: throw
the sentient life they discover a few crumbs of run time, so that
when they in turn are overcome by even more advanced life, they
too won't simply be discarded.

> Also, how does something that is initially someone's private
> property (e.g., an egg cell) become self-owned (e.g., a free
> person)?

In the general case, as evidenced in the painful I-word thread,
that property should never become self-owned. (Present humanity
has culturally evolved an exception: once a c***d has had a few
years of run time, it ceases in human communities to be the
property of its parent processes and obtains legal rights and
citizenship, because human/primate history on Earth showed that
cooperative bands of humans achieved progress faster when their
members had legal rights. But this last *explanation* is of
course only my conjecture. All we know for sure is that legal
rights and freedom worked for human societies, not exactly why.)

> What philosophical principles explain/justify these transitions?

(1) Because they work. That is, judging from our limited familiarity
with life in the universe, micro-managing what your children do, what
your employees do, or what the algorithms running on your computronium
do appears to be anti-progress; and (2) because I approve of them.

Lee


