Anders Sandberg wrote:
>
> On Wed, Dec 26, 2001 at 01:22:43PM -0500, Eliezer S. Yudkowsky wrote:
> >
> > What happens to communities who refuse to conform to this "universal
> > constitution"? How exactly is this universal constitution enforced? If
> > one of the communities violates the constitution by creating and enslaving
> > sentients (entirely inside its own walls), is this violation detected, and
> > if so how?
>
> You are apparently thinking more in terms of AI slavery than political
> prisoners. Whether the constitution would be about sentient rights or human
> rights is of course important in the long run, but setting up a system
> somewhat like the above federation is something we can do in the near
> future. This system can then adapt to new developments, and if the
> constitution update process is not unnecessarily rigid it wouldn't be too
> hard to include general sentient rights as people become more aware of
> their possibility.
AI slavery is less expensive than biological slavery, but if you insist
that there is any difference whatever between the two, it's easy enough to
imagine Osama bin Laden biologically cloning seventy-two helpless
virgins. From my perspective, these are people too and they have just as
much claim on my sympathy as you or anyone else. If it's worth saving the
world, it's worth saving them too.
> The important thing to remember about systems like this is that we do not
> have to get everything perfectly right at the first try. Good political
> solutions are flexible and can be adaptive.
1) This sounds to me like a set of heuristics unadapted to dealing with
existential risks (and not just the Bangs, either). Some errors are
nonrecoverable. If Robin Hanson's cosmic-commons colonization race turns
out to be a Whimper, then we had better not get started down that road,
because once begun it won't stop.
2) The cost in sentient suffering in a single non-federation community,
under the framework you present, could enormously exceed the sum of all
sentient suffering in history up until this point. This is not a trivial
error.
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence