RE: When Programs Benefit

From: Lee Corbin (lcorbin@tsoft.com)
Date: Mon Jun 03 2002 - 20:17:33 MDT


Wei Dai writes

> Hmm, I wonder if you misunderstood me. What I meant is that if a program
> paid a high subjective price (i.e. experienced various hardships) in
> trying to reach a goal without knowing that it could be halted at any
> time, and you involuntarily halt it just before it does, that seems pretty
> bad. I'm not sure what the connection to "the ends justify the means" is.

Well, on the literal level, programs experiencing various
hardships, as you put it, in order to reach a goal are
allowing the ends to justify the means: a very satisfying
end is (in their value system) justifying unpleasant means.
But I agree with you: it's exactly as unfair as what happens
to someone who has made a bet at some cost, but the world
ends before he is able to collect. Since this is rather
normal in life, though---those of us who die can't all end on
a winning note---I'm still surprised at how strongly you
feel about this. Most of us are usually much more sorry
about, and concerned with, general suffering in the universe,
whether or not some "goal" is involved.

To recapitulate, you had originally written

> I would argue that [even conscious] programs don't benefit directly
> from getting run time. They benefit from achieving goals that require
> run time. Thus if you halt a program before it finishes running or
> achieves its goals, it may not benefit at all.

and that's what set me off. You focus on what, in my value
system, is a mere "point-benefit". The satisfaction at the end of
a long arduous project is what evidently strikes you as the
major benefit. I am much more of a reductionist: one must
integrate over every volume of spacetime in order to calculate
or estimate the total benefit an entity obtains from existence.
That's why all the benefit from living accrues from all the
worthwhile moments. Moreover (and here's where my values
diverge from many others), I consider it illusory to assign
a great Meaning or Purpose to some moments and not others.
But this really can't be profitably discussed, I suspect,
without a concrete thought experiment at hand. Unfortunately,
I can't think of an appropriate one, right now.
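
(Purely as a sketch, in notation of my own choosing: write b(t)
for the momentary benefit an entity enjoys at time t. Then what
I call the total benefit is roughly the integral of b(t) dt over
the entity's whole existence, whereas the view I'm objecting to
counts only b at the single goal-achieving moment at the end.)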

> > A piece of the moon, which is non-living, should be taken over
> > by the nearest life that is capable of doing so.
>
> You can't literally mean "nearest" since the moon is moving and a
> different person is nearest it every second. You must mean whoever is able
> to take over it first should do so.

Yes.

> But how do you draw the line between "taking over" and "just visiting"?
> Suppose some astronaut visited the moon, claimed it for himself, then
> returned to Earth. Should everyone else then respect his "property right"
> over the moon?

Well, I think you are mixing distinct issues here. Matter is
*taken over* in the manner I'm speaking of, and approve of,
when it is reorganized into a more *living* form.
For example, my arm is hardly alive. It's not even conscious.
Were alien technology to visit Earth and take over my arm,
then my arm would begin to experience, and (it is hoped) to
experience benefit far beyond my wildest imaginings. (It
is hoped that I would not be inconvenienced by that---but
there's no clear reason why I would even have to notice.)

Your other issue is, What are property rights? It has been
found by every human civilization, probably due to our
hunter/gatherer adaptive lifestyle and the consequent way
evolution has shaped our minds, that property rights are
the best way of satisfying the most deeply built-in
human needs, such as adequate nutrition, shelter, and
clothing. Collectivist solutions fail in comparison to
capitalist ones, which are built on the concept of private
property and liberty. So even within the framework of all
the customs of our successful societies, the astronaut who
visited the moon and claimed it all would be seen as greedy
and ridiculous (just as history views Balboa, who claimed for
the King of Spain all the lands bordering the Pacific Ocean).

> > A piece of the moon, which is non-living, should be taken over
> > by the nearest life that is capable of doing so. After it is
> > sentient, and infinitely vaster and more advanced algorithms
> > show up to take over, they should do their best to observe the
> > Meta-Golden rule: to that sentient life they discover, throw a
> > few crumbs of run time, so that when in turn they are overcome by
> an even more advanced life, they too won't simply be discarded.
>
> Your Meta-Golden rule [actually, I. J. Good's Meta Golden Rule]
> only works if everyone believes there's an infinite hierarchy of
> more and more advanced life, which seems unlikely. If there
> is a most advanced life, it has no incentive to follow the
> Meta-Golden rule, and then the second most advanced life has
> no incentive to follow it, and so on.

It is computationally infeasible at the present time, and
in all the most extreme Singularities I've read about, for
an intelligence to be certain that it is sitting at the
pinnacle of the possibilities for advanced intelligence. So,
for as far as we can currently see into the future, there
appears to be no limit to how sublimely matter and energy
might be organized.
 
> > In the general case, as evidenced in the painful I-word thread,
> property [originally belonging to others] should never become
> > self-owned. (Present humanity has culturally evolved an
> > exception... because human/primate Earth history showed that
> > the cooperative bands of humans achieved progress faster with
> > citizens having legal rights. But this last *explanation* is
> > of course only my conjecture. All we know for sure is that
> > legal rights and freedom worked for human societies, but not
> exactly why.)
>
> I don't understand this part. Why do you think the current human cultural
> norm should be the exception rather than the rule? It seems to work for
> humans, so why not more generally?

Intelligence happened to develop among primates in separate
skulls. This happened because the primates descended from
animals whose intelligence was individual, not collective.
As a result, it was always the case that no matter how big or how
strong or how smart a primate was, it was helpless in the
hands of just a few other individuals acting in concert.
But a Singularity will develop so quickly that the solution
evolution found for animals probably won't apply IMO. There
will instead be a tendency for just one singularity nexus to
take over the solar system. Yet you recall, of course, our
earlier discussions about this in the Post Singularity Earth
thread. The speed of light could become, in effect, so slow that
separately advancing entities could form even centimeters or
meters apart. In that case, I agree with your point: we don't
know but that evolution will rediscover property rights.

Moreover, I advocate that when the Meta-Golden rule is used, a controlling
entity at a high level permit total freedom at lower levels, just
so long as there is no imminent threat of losing control because
of the chance discovery of super-algorithms at the lower level.
This *does* fit the "current human cultural norm" that you write
about above. I hadn't quite looked at it this way. Thanks.

Lee


