Justin Corwin wrote:
>
> Suppose I want to model some intelligences. They're human-equivalent, or
> maybe a little less than human level, in an abstract environment, to
> investigate a minor point in game theory that I think is relevant to optimal
> intelligence. In the course of the simulation, most of them will be hurt. At
> the end of the simulation, I'll reclaim the mass involved, effectively
> killing them all. Does the Sysop give an API error when I attempt this
> experiment?
Would you like to find out that you, yourself, are simply a modeled
intelligence in someone's imagination? That you therefore have no
citizenship rights?
Clearly, then, you can't run this simulation at such fine granularity as
to result in actual citizen formation; you'll have to think about it at a
higher level of abstraction. I don't expect this will pose a problem.
> I know this is a repeat, but I have to ask, assuming the above scenario (or
> at least something else that pisses me off): why can't I just run from
> Sysopian space?
What are you going to use for a starship? Matter, right? Where'd you get
the matter? Our Solar System. All the matter here is SysopMatter, though
this doesn't show up unless you want it to, or you try to zap someone
without their consent. Totally transparent omnipresence doesn't look very
complicated to me. Your starship is also SysopMatter; so, probably, are
you, though I suppose unmodified biology can probably be safely
Sysop-free. If you build anything when you get to Centauri, you'll build
it using SysopMatter tools that produce more SysopMatter (assuming
Centauri hasn't already been colonized or declared a tourist reserve).
All of this, again, is probably totally transparent - at least if you're
the kind of Luddite who prefers transparency - until you try to torture
somebody.
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence