Re: When Programs Benefit

From: Wei Dai (weidai@eskimo.com)
Date: Tue Jun 04 2002 - 11:28:34 MDT


On Mon, Jun 03, 2002 at 07:17:33PM -0700, Lee Corbin wrote:
> Well, on the literal level, programs experiencing various
> hardships, as you put it, in order to reach a goal are
> allowing the ends to justify the means: a very satisfying
> end is (in their value system) justifying unpleasant means.
> But I agree with you: it's exactly as unfair as what happens
> to someone who has made a bet at some cost, but the world
> ends before he is able to collect. But since this is rather
> normal in life---of those of us who die we can't all end on
> a winning note---I'm still surprised at how strongly you
> feel about this. Most of us are usually much more sorry
> and concerned merely about general suffering in the universe,
> whether or not some "goal" is involved.

I never said that I have that kind of value system or that I approve of
others having that kind of value system. It was an example to show that
not everyone's benefit from having run-time accrues monotonically, and
that therefore you should get permission (i.e., informed consent) from
someone before running him, and should not halt someone involuntarily.
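
Here is a rough sketch, in Python and with made-up numbers of my own, of
the kind of value system I mean: one where almost all of the benefit
arrives only at the end of a long, unpleasant project, so that halting
the person partway through leaves him worse off than never running him
at all.

    # Toy model: per-moment benefit for someone slogging through a long,
    # arduous project whose payoff comes only at the very end.
    moments = [-1] * 9 + [20]     # nine unpleasant moments, then the reward

    def total_benefit(halted_after):
        # Cumulative benefit if the program is halted after `halted_after` moments.
        return sum(moments[:halted_after])

    print(total_benefit(10))   # 11: running to completion is a net benefit
    print(total_benefit(9))    # -9: halting just before the payoff is a net harm

The particular numbers don't matter; the point is only that cumulative
benefit need not increase monotonically with run-time, which is why
consent matters.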

> and that's what set me off. You focus on what is a "point-
> benefit" in my value system. The satisfaction at the end of
> a long arduous project is what evidently strikes you as the
> major benefit.

Again, no, I'm just saying that some people could have that kind of value
system.

> I am much more of a reductionist: one must
> integrate over every volume of spacetime in order to calculate
> or estimate the total benefit an entity obtains from existence.
> That's why all the benefit from living accrues from all the
> worthwhile moments. Moreover (and here's where my values
> diverge from many others), I consider it illusory to assign
> a great Meaning or Purpose to some moments and not others.
> But this really can't be profitably discussed, I suspect,
> without a concrete thought experiment at hand. Unfortunately,
> I can't think of an appropriate one, right now.

Ok, let me know if you manage to think of one.
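
(To restate your reductionist position in notation of my own, just to
check that I follow: the total benefit B an entity obtains from existence
would be something like B = integral of b(x,t) dV dt, taken over the
spacetime region the entity occupies, where b is a momentary benefit
density and no moment is given special weight as Meaning or Purpose.)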

> Well, I think you are mixing distinct issues here. Matter is
> *taken over* in the manner I'm speaking of and in the manner
> I approve of when it is reorganized into a more *living* form.
> For example, my arm is hardly alive. It's not even conscious.
> Were alien technology to visit Earth and take over my arm,
> then my arm would begin to experience, and (it is hoped) to
> experience benefit far beyond my wildest imaginings. (It
> is hoped that I would not be inconvenienced by that---but
> there's no clear reason why I would even have to notice.)
>
> Your other issue is, What are property rights? It has been
> found by every human civilization, probably due to our
> hunter/gatherer adaptive lifestyle and the consequent way
> evolution has shaped our minds, that property rights are
> the best way of satisfying the most essentially built-in
> human needs, such as adequate nutrition, shelter, and
> clothing. Collectivist solutions fail in comparison to
> capitalist ones, which are built on the concept of private
> property and liberty. So even within the framework of all
> the customs of our successful societies, the astronaut who
> visited the moon and claimed it all would be seen as greedy
> and ridiculous (just as history views Balboa, who claimed for
> the King of Spain all the lands bordering the Pacific Ocean).

These two issues are related, though. You seem to be saying that property
rights are a cultural norm that is currently pro-progress. But if the alien
technology were to visit Earth, people's property rights over their own
bodies would no longer be pro-progress, and therefore the aliens should
ignore them. Is that a correct understanding?

If our current property-rights norms are not going to be sufficient for
the future, what will replace them? You seem to be suggesting that property
rights should apply only between individuals with similar levels of
technology, while the Meta-Golden rule would apply between people with
different levels of technology. Is that correct?

> It is computationally infeasible at the present time, and
> in all the most extreme Singularities I've read about, for
> an intelligence to be certain that it is sitting at the
> pinnacle of the possibilities for advanced intelligence. So,
> for as far as we can currently see into the future, there
> appears to be no limit to how sublime matter and energy
> might be organized.

We can already see some of the limits. See for example my
"ultimate fate of civilization" post archived at
http://www.lucifer.com/exi-lists/extropians/1892.html.

But perhaps you're right: even a tiny chance that more advanced life is
possible could be enough to make it worthwhile to treat less advanced life
nicely, if doing so is sufficiently cheap.
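
(In rough expected-value terms, with labels of my own: let p be the small
probability that more advanced life is possible, V the value of being
treated well by it under something like the Meta-Golden rule, and c the
cost of treating less advanced life nicely. Then being nice pays whenever
p*V > c, and V may be large enough to outweigh even a tiny p.)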

> Moreover, I advocate that when the Meta rule is used, a controlling
> entity at a high level permit total freedom at lower levels, just
> so long as there is no imminent threat of losing control because
> of the chance discovery of super-algorithms at the lower level.
> This *does* fit the "current human cultural norm" that you write
> about above. I hadn't quite looked at it this way. Thanks.

Sounds like Eliezer's Sysop Scenario, but arrived at by a completely
different route. Interesting...


