Re: Arbitrariness of Ethics (was singularity logic loop)

From: Doctor Logic (hidden@extropy.org)
Date: Mon Apr 29 2002 - 10:36:55 MDT


>I believe that the long range or long view always favors
>cooperation. As we increase in capabilities and in practical
>intelligence and in real abundance I find it much less likely
>that war would seem like a good alternative. I didn't say no
>defense if attacked though.

As long as there are more than a handful of other intelligences
on the planet, they will represent a threat to an SI (and
probably to themselves).

In the future, our powers will be enhanced by technology.
Perhaps we will all have back-yard fusion reactors. The power
to do harm through misuse will be magnified accordingly.

I agree that cooperation is the best approach, but it only takes
one misbehaving entity to ruin it all for the rest. Evolution doesn't
respect rules, only opportunities.
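
For concreteness, here is a toy sketch in Python, assuming
nothing beyond the textbook one-shot prisoner's dilemma
(payoff ordering T > R > P > S) and simple replicator
dynamics. It's only an illustration, not a model of anything
real:

  # Toy replicator dynamics for a one-shot prisoner's dilemma.
  # With T > R > P > S, defection strictly dominates cooperation.
  R, S, T, P = 3.0, 0.0, 5.0, 1.0

  x = 0.999  # share of cooperators; one "misbehaving entity" = 0.1%
  for gen in range(51):
      fc = R * x + S * (1.0 - x)     # expected payoff, cooperator
      fd = T * x + P * (1.0 - x)     # expected payoff, defector
      avg = x * fc + (1.0 - x) * fd  # population mean payoff
      x = x * fc / avg               # replicator update
      if gen % 10 == 0:
          print("gen %2d: cooperator share = %.3f" % (gen, x))

The cooperators' share falls toward zero no matter how small
the initial invasion, which is the sense in which one defector
ruins it for everyone else.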

That leaves only a few solutions for a stable world in which
intelligence can survive.

1) The biggest SI on the block assimilates all other intelligences
as simulations.

2) The threat of mutually assured destruction keeps all intellects in line.

3) Intelligences leave Earth and colonize other worlds so that
they cannot be assimilated. This is workable until we develop
interplanetary strategic weapons. ;)

As for whether assimilation is "nice"...

Imagine that you had the power to absorb humans as simulations.
These simulations might use only a fraction of your resources, and
need never die. They could even live eternal lives in a virtual heaven.
Then you look at the Middle East conflict. You think "Hey! Those guys
are gonna hurt somebody, or themselves. They might even set off
WWIII. I'll do them a favor..."

Even a benevolent SI can see the good in this proposition.

-DL

----
This message was posted by Doctor Logic to the Extropians 2002 board on ExI BBS.
<http://www.extropy.org/bbs/index.php?board=61;action=display;threadid=51629>
