From: Anders Sandberg (asa@nada.kth.se)
Date: Wed Dec 26 2001 - 16:17:33 MST
On Wed, Dec 26, 2001 at 03:28:54PM -0500, Eliezer S. Yudkowsky wrote:
> >
> > You are apparently thinking more in terms of AI slavery than political
> > prisoners. Whether the constitution would be about sentient rights or human
> > rights is of course important in the long run, but setting up a system
> > somewhat like the above federation is something we can do in the near
> > future. This system can then adapt to new developments, and if the
> > constitution update process is not unnecessarily rigid it wouldn't be too
> > hard to include general sentient rights as people become more aware of
> > their possibility.
>
> AI slavery is less expensive than biological slavery, but if you insist
> that there is any difference whatever between the two, it's easy enough to
> imagine Osama bin Laden biologically cloning seventy-two helpless
> virgins. From my perspective, these are people too and they have just as
> much claim on my sympathy as you or anyone else. If it's worth saving the
> world, it's worth saving them too.
Sure. Do you think Osamaland would be accepted by any reasonable federative
constitution? (OK, the UN does allow pretty much anything as a member
state.) Even if it were technically legal through some loophole, it is darn
likely most other members would take steps to close that loophole. My
point is that we are talking about getting real people to set up real
political systems in the real world, and while you and I think that AI will
be relevant sometime soon, we had better convince people of that before
making suggestions and political visions contingent on our specific
assumptions about AI. Even if this federation, when it is set up, has not
the least legal protection for AIs or clones, if the basic constitution
and the shared assumptions of the founding communities are sane enough, the
system can add such protection as it begins to seem relevant. Trying to set
up a perfect system from the start is bound to get lost in minutiae,
spending inordinate amounts of energy on possibilities that never play out,
and would likely end up as a top-down approach.
> > The important thing to remember about systems like this is that we do not
> > have to get everything perfectly right at the first try. Good political
> > solutions are flexible and can be adaptive.
>
> 1) This sounds to me like a set of heuristics unadapted to dealing with
> existential risks (and not just the Bangs, either). Some errors are
> nonrecoverable. If Robin Hanson's cosmic-commons colonization race turns
> out to be a Whimper, then we had better not get started down that road,
> because once begun it won't stop.
Which existential risks are relevant? When planning to act in any way you
have to estimate the risks. If the risk is too great you become careful,
and you may avoid certain actions entirely if the risk is unacceptable. In
many cases both the probability and the severity of a risk are unknown,
and have to be gradually refined given what we learn. Basing your actions
on your prior estimate and then never revising it would be irrational. So
what is needed are systems that allow us to learn and to act according to
what we have learned.
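To make that revision process concrete, here is a toy sketch in Python.
The framing is entirely my own illustration, not anything inherent in the
argument: treat the risk as an unknown per-trial probability of mishap and
update a Beta prior as trial outcomes arrive.

# Toy Bayesian refinement of a risk estimate: start from a prior,
# revise as evidence arrives, never freeze the estimate.
# Assumptions (mine, for illustration): the risk is an unknown
# per-trial probability of mishap, and evidence is a stream of
# observed trial outcomes (a Beta-Bernoulli update).

def update(alpha, beta, mishap):
    """Update a Beta(alpha, beta) belief with one observed trial."""
    return (alpha + 1, beta) if mishap else (alpha, beta + 1)

alpha, beta = 1.0, 1.0  # uniform prior: we start out knowing nothing
observations = [False, False, True, False, False, False, False, False]

for mishap in observations:
    alpha, beta = update(alpha, beta, mishap)
    print(f"current risk estimate: {alpha / (alpha + beta):.3f}")

The point is only that the estimate is always provisional: each new
observation moves it, which is what a learning system has to allow for.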
The first part of this, learning, clearly benefits from pluralist
approaches, since many different lines of inquiry can be pursued and can
compete in the marketplace of ideas. This is far more likely to give good
estimates of risk than non-pluralist approaches. As for behavior control,
most communities will act according to their self-interest and hence not
take more risks than they estimate to be worthwhile. Now, since these
communities are linked not just by a constitution but also by the strands
of an open meta-society, there will also be cross-community interaction
(through trade etc.) that is likely to pull individual communities'
estimates toward a risk consensus. It is to a large extent a distributed
agoric system.
I know some readers will respond with "But this does not guarantee that the
federation doesn't take risks with Things Man Was Not Meant To Know!". That
is true. But would it on average take *more* risks than a reasonably
rational individual? It doesn't seem likely. The one exception would be if
some member communities had few links to the rest and hence did not share
the risk consensus; then the likelihood of someone taking a risk greater
than what the average person would consider reasonable would be higher - it
is the usual problem of having many free agents in a system. The federation
doesn't solve it, but it ameliorates it by providing one linking mechanism.
A quick aside:
I think there is a certain risk involved with the concept of existential
risks, actually. Given the common misinterpretation of the precautionary
principle as "do not do anything that has not been proven safe", even the
idea of existential risks provides an excellent and rhetorically powerful
argument for stasis. Leon Kass is essentially using this in his
anti-posthuman campaign: since there may be existential risks involved with
posthumanity *it must be prevented*. And since posthumanity depends on
learning more about certain areas, inquiry into these has to be curtailed -
which incidentally makes improved risk estimation harder or impossible.
Even here on this list we sometimes hear totalistic arguments for global
police forces, or worse atrocities, supported by the idea that they would
be necessary to counter an existential risk.
This is not an argument against discussing existential risks, but rather
against invoking them without thinking of their epistemological context.
> 2) The cost in sentient suffering in a single non-federation community,
> under the framework you present, could enormously exceed the sum of all
> sentient suffering in history up until this point. This is not a trivial
> error.
No, but that is not something the federation was supposed to solve either.
The fact that there is awful suffering and tyranny in some countries
doesn't invalidate the political system of the US. The federation is not
based on a utilitarian ethical perspective where the goal is to maximize
global happiness.
I distrust the search for global optima and solutions that solve every
problem. The world is complex, changing and filled with adaptation, making
any such absolutist solution futile, or worse, limiting. I prefer to view
every proposed solution as partial and under revision as we learn more.
--
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y