Re: Arbitrariness of Ethics (was singularity logic loop)

From: Samantha Atkins (samantha@objectent.com)
Date: Sun Apr 28 2002 - 15:25:45 MDT


Lee Corbin wrote:

> Samantha wrote
>
>
>>Lee Corbin wrote:
>>
>>
>>>There are just two reasons that it might be nice to
>>>humans, so far as I know: one, I. J. Good's meta-
>>>golden rule... and two, because someone built
>>>it to be so nice.
>>>
>>Or it decides that non-arbitrary ethics preclude, as much as
>>possible, destroying other sentients.
>>
>
> That "it decides" that there exists a non-arbitrary
> ethical system is no different, so far as I can tell,
> from "someone built it to be so nice".

Perhaps not in effect, but if there are non-arbitrary ethical
norms, then that is significant over and above the behavior of
our hypothetical SI.

>
> You may mean that a sufficiently intelligent and objective
> AI must deduce as a logical or scientific truth that a
> certain ethical system is correct. I've never seen any
> evidence that such exists.
>

Do you then eschew ethics? If not, then do you believe a vastly
more intelligent being without your evolutionary programming would?

 
>
>>>As the speed of light becomes so slow (from the SI
>>>perspective), evolution even over the Earth could break
>>>apart and become localized quickly, and in that case
>>>systems that allowed anything to hold back their own
>>>development would be at a serious disadvantage.
>>>
>>This has the built-in assumption that endless growth is such a
>>strong and mandatory drive that all else is subservient. I
>>don't think this is a given.
>>
>
> If an SI that at one time did control the entire Earth were
> to break apart into sufficiently many pieces, as I was trying
> to suggest, then any variation in acquisitiveness would result,
> by evolutionary principles, in the more acquisitive growing
> at the expense of the less acquisitive.
>

There are many factors beyond mere acquisitiveness that make
for the success, not to mention the well-being, of an entity or
social group. That is what I was trying to interject.
Sometimes we seem to all but assume that acquisitiveness is a
nearly unlimited good.

>
>>There are no other SIs in the neighborhood we can detect
>>and it is not a given that they could think of nothing
>>better than endlessly competing with one another throughout
>>space-time.
>>
>
> Indeed, they could come to such an agreement, but as I said,
> if they are sufficiently many, then the agreement might be
> as unstable as an oil cartel's.
>

It might. But I still hold out some hope that a vastly superior
intelligence will be a good deal more rational and ethical, and
will value cooperation and mutual support among entities more
than humans generally do. I can't "prove" it will go that way.
Part of why I cannot is that it will only happen through the
decisions and understanding of the sentients involved, including
ourselves. I cannot prove how we will in fact decide, or that we
will manage to act in coherence with our decisions.

>
>>Precisely why should it be that hungry, or that hungry that
>>quickly? Cancerous levels of growth until all resources are
>>consumed are simply not the only viable models of
>>Singularity-level beings.
>>
>
> Well, nothing is certain, but where there is vast potential,
> great diversity among systems results in that potential
> being exploited.
>
> Some futurists, I take it, pin their hopes on a single all-
> encompassing AI that would take over the Earth and solar system,
> and institute a permanent Pax by force. That could happen,
> though it's by no means certain that that's how everything is
> going to shake out. But even if it does, the argument I alluded
> to above about the speed of light and local information processing
> would strain the internal loyalty of such a system.
>

I pin my hopes on all sentients, of however much raw power,
coming to a mutual cooperative peace through realizing that they
can all benefit much more from such an arrangement than from
thinly veiled mutual distrust and soft (sometimes hard) war.
Unlike hoping for a single SI to declare peace, the possibility
of what I hope for is directly proportional to what each of us
decides and how each of us chooses to live.

- samantha
