Re: Singularity Card Game Alpha Test

From: Adrian Tymes (wingcat@pacbell.net)
Date: Tue Mar 05 2002 - 14:32:05 MST


Anders Sandberg wrote:

> On Sun, Mar 03, 2002 at 09:28:40PM -0800, Adrian Tymes wrote:
>>Someone asked for the rules to this. Like I said, there were no formal
>>rules, but here's my best shot at making some up.
>>
>>http://www.wingedcat.org/singulcard/
>>
>>Feel free to forward this URL to anyone else that may be interested.
>>If I get enough suggestions - *especially* ideas on how to fill up the
>>card list - I think I know someone who could get this tested and
>>published (Cheap-Ass Games, or a similar publisher). But that won't
>>happen with just four cards on the list; I'd need at least forty (or
>>somewhere in that neighborhood) before I'd approach them with this.
>
> I like the basic simplicity of the game, a bit like Nim. The
> question is whether to keep it simple and clean, or more complex but
> also more related to real ideas about technological futures.

If you make it too complex, the target audience - non-sophisticated
people who have open minds - will shrug and pass on to the next
distraction. The primary reason for creating this game is not the game
itself, but memetics. Therefore, if we can find simple ways to
incorporate ideas about the future (for instance, uploads and similar
tricks == no more absolute robots vs. humans distinction), they may be
good additions - but too much complexity in game implementation is,
itself, reason enough to reject a given idea for this particular game.

Or, in short: eyes on the prize.

> My main problem with it is the technological and economic
> assumptions. Is the singularity really a goal in itself, and why
> would a disaster automatically happen if the points are unbalanced?
> It seems to assume technology is developed because somebody just
> decides to, and that things can be perfectly banned.

Ah, no. You're misreading things...but perhaps, if I explain, we can
find a clearer way of stating these concepts. (That you read it as
"perfect bans are possible" is a bug in the wording, just like a bug in
programming.)

Technology cards, when played, represent the *commercialization* (or
other widespread deployment) of technology. The actual, mere
development of a technology is a non-event as far as this game is
concerned; when a tech card is played is when the tech starts
affecting Joe Q. Public. Likewise, bans - though not perfect - do
remove a technology from most people's lives, but the factories et al.
that produce the banned tech are merely mothballed...or, at least, the
information about how to set up said factories is still floating around.
Either way, once a technology has been deployed, it can and will be
redeployed once the laws against it are removed.

For example: cloning is in the media right now. But, can an average
person purchase a clone at the moment? No way. Therefore, the Cloning
card has not yet been played. Now, this might be a candidate for an
Event - a card which cannot be undone, and which can slightly change
the rules of the game; that is why I set up said category.
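
For the programmers on the list, here is roughly the card lifecycle I
have in mind - Python as pseudo-rules, with all names mine for
illustration, not text from the actual rules page:

    class TechCard:
        """One technology card. Playing it means commercialization,
        not mere invention in a lab."""

        def __init__(self, name):
            self.name = name
            self.deployed = False  # has it ever reached Joe Q. Public?
            self.banned = False

        def play(self):
            # Playing = widespread deployment; this is when the tech
            # starts affecting everyday life.
            self.deployed = True
            self.banned = False

        def ban(self):
            # A ban pulls the tech out of most people's lives, but the
            # knowledge (and the mothballed factories) stick around.
            if self.deployed:
                self.banned = True

        def repeal(self):
            # ...so once the law goes away, the tech comes right back.
            self.banned = False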

> I would like to
> make the game a bit more Illuminati-like, even if this may lose
> some of the elegance.

Umm...it's a nice idea, but I fail to see how this implementation adds
to the goal of the game. (See above.)

> The goal should be to make the situation such that everybody can
> profit from developing dangerous technology, but that cooperation is
> necessary to uphold bans. Some technologies that are very useful
> (like nanoimmune systems) also require dangerous technologies.

Prerequisites are already explicitly allowed for. Adding in the profit
motive taints the meme with "tech is developed so the rich get richer",
not "tech is developed to make the world a better place". While they
are both true, it is better to promote only the latter meme, not the
former.
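
For clarity, prerequisites currently work like this: a card simply
cannot be played until everything it depends on has been deployed. In
Python pseudo-rules (the dependency list is hypothetical, just to show
the shape):

    PREREQS = {
        # hypothetical dependencies, riffing on the nanoimmune example
        "Nanoimmune Systems": ["Nano-manipulators", "Desktop Fabrication"],
    }

    def can_play(card, deployed):
        # A card is playable only once all of its prerequisites
        # have themselves been deployed.
        return all(p in deployed for p in PREREQS.get(card, []))

So can_play("Nanoimmune Systems", {"Nano-manipulators"}) stays False
until Desktop Fabrication is also on the table.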

I am, however, thinking of putting in another optional action: Forfeit.
"Your group needs to develop new technologies to stay relevant to the
world. If it sleeps, it dies...and you lose the game automatically."

> The game ends in a disaster if too many risk points are accumulated.
> For example, Nano-manipulators have just +1 Nano risk (they are in
> themselves not very risky) while Desktop Fabrication has perhaps +3
> risk (it is an application) and Free Replicators +10 risk. Risk can
> be reduced by developing safeguard technologies such as Nanoimmune
> Systems (-10 Nano Risk) and policies such as Secure Nanolabs (-2
> Nano Risk). So the race is to gain enough points while not allowing
> the risk to grow too large.

The point is that the knowledge itself, and the world's possession of
it, *is* the risk. And the promise. The two are the same.
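
That said, the risk bookkeeping you describe is easy enough to
implement, whatever numbers we settle on. A quick Python sketch using
your example values (the threshold of 20 is purely a placeholder):

    NANO_THRESHOLD = 20  # placeholder; the real value needs playtesting

    RISK_DELTAS = {
        "Nano-manipulators":    +1,
        "Desktop Fabrication":  +3,
        "Free Replicators":    +10,
        "Nanoimmune Systems":  -10,
        "Secure Nanolabs":      -2,
    }

    def apply_card(card, nano_risk):
        # Each play moves the marker along the line; crossing the
        # threshold ends the game in a Nano Catastrophe.
        nano_risk = max(0, nano_risk + RISK_DELTAS.get(card, 0))
        if nano_risk >= NANO_THRESHOLD:
            print("Nano Catastrophe - everybody loses")
        return nano_risk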

> I would like to add a fourth kind of risk: Social risk. While
> technologies may cause other risks, bans make society more
> dangerous. If the Social risk goes too high, we end up with a social
> disaster. The risks can easily be represented by markers moved along
> a line.

Problem: what is a disaster for some may be heaven for others. The
world will not end if, say, America becomes a police state under martial
law, even if it would suck. The world would recover from such a state
within (current) human lifespans.

> Each turn, a card is also drawn from the deck that is played by the
> "world" - possible events and developments not foreseen by the
> players. If it is not a valid play it will vanish, but if the
> technology or event is possible to play it will be played.

That's a good idea. It makes the game a bit less predictable, and thus
adds more risk to playing brinksmanship with the Catastrophes. I'll
add it.
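
Per turn, something like this (is_valid_play and play stand in for
whatever the final rules end up defining):

    import random

    def world_turn(deck, game):
        # The "world" draws one card each turn: possible events and
        # developments not foreseen by the players.
        if not deck:
            return
        card = deck.pop(random.randrange(len(deck)))
        if game.is_valid_play(card):  # hypothetical rules hooks
            game.play(card)
        # an invalid draw simply vanishes - no replacement is drawn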

> Some possible cards based on this (I have not included points
> seriously here, since I'm more interested in general ideas of
> contents and gameplay):

Points and dependencies will probably be balanced once we have a good
set of cards. I've added in the non-social ones (though, for instance,
"Space Habitat" seems more like an event: once you get a bunch of people
living in orbit, trying to ban it would probably have no effect).


