From: Anders Sandberg (asa@nada.kth.se)
Date: Tue Mar 05 2002 - 16:34:02 MST
On Tue, Mar 05, 2002 at 01:32:05PM -0800, Adrian Tymes wrote:
>
> If you make it too complex, the target audience - non-sophisticated
> people who have open minds - will shrug and pass on to the next
> distraction. The primary reason for creating this game is not the game
> itself, but memetics.
OK. But if you do not create a game that is playable and fun, it won't
matter what memes you try to spread through it.
I would love to have a very clean game that manages to convey core
transhuman ideas.
> >The goal should be to make the situation such that everybody can
> >profit from developing dangerous technology, but that cooperation is
> >necessary to uphold bans. Some technologies that are very useful
> >(like nanoimmune systems) also require dangerous technologies.
>
>
> Prerequisites are already explicitly allowed for. Adding in the profit
> motive taints the meme with "tech is developed so the rich get richer",
> not "tech is developed to make the world a better place". While they
> are both true, it is better to promote only the latter meme, not the
> former.
I think leaving the profit motive out is actually a bad thing. First, it
spreads an erroneous image of how technology is developed (technology is
built because somebody somewhere decided mankind should have it, with no
links to economics, culture or happenstance, so technology can simply be
regulated by controlling that guy), and this image is already doing a
lot of damage to the transhuman project (for example political attempts
at relinquishment). The purely altruist model of tech development is
even worse, since it so easily lends itself to centralism and
utilitarian projects. Both are dangerous oversimplifications, missing
the truly complex mixture of motivations behind development - people
develop technology out of altruism, for profit, from curiosity and from
random ideas, all shaped by the surrounding society.
Having a competition element is good for motivating people; the trick is
to make the game a complex mixture of cooperation and competition.
> >The game ends in a disaster if too many risk points are accumulated.
> >For example, Nano-manipulators have just +1 Nano risk (they are in
> >themselves not very risky) while Desktop Fabrication has perhaps +3
> >risk (it is an application) and Free Replicators +10 risk. Risk can
> >be reduced by developing safeguard technologies such as Nanoimmune
> >Systems (-10 Nano Risk) and policies such as Secure Nanolabs (-2
> >Nano Risk). So the race is to gain enough points while not allowing
> >the risk to grow too large.
>
>
> The point is that the knowledge itself, and the world's possession of
> it, *is* the risk. And the promise. The two are the same.
Remember the goal. If you claim this, then many players will realize
that the sure way of winning is not to play the game. What are the
benefits of the technologies? To the player there are none in the
current form, just an increase in abstract numbers. Even a "Clinical
Immortality" card doesn't cut it as a motivator: it might motivate the
humans in the game world, but it does not per se mean much to the
player. The singularity is just the end, and has no other meaning.
Instead of promoting progress, the message seems to be that developing
these technologies increases risk, and the singularity is just someone
winning. That doesn't seem very extropian to me.
Perhaps a better way of handling it would be to have a "tolerance"
level for the different point tracks. It represents how much the world
can handle: which institutions exist to deal with the issues, and how
far people have adapted. The points must remain below the tolerance for
the world to work; tolerance is increased using either safety
technologies or perhaps direct payment of money (representing the
building of institutions). To reach the singularity, the world needs to
adapt. This seems closer to the memetic goal of the game and to
transhumanism.
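To make the mechanic concrete, here is a minimal sketch in Python of
how the tolerance idea could work. The class, the track names and all
the numbers are my own assumptions for illustration, not settled game
design:

class World:
    def __init__(self):
        # Accumulated risk per track, and how much the world can absorb.
        self.risk = {"Nano": 0, "Bio": 0, "AI": 0}
        self.tolerance = {"Nano": 5, "Bio": 5, "AI": 5}

    def develop(self, track, risk_delta):
        # Playing a technology card moves its track's risk marker;
        # safeguard technologies carry a negative risk_delta.
        self.risk[track] = max(0, self.risk[track] + risk_delta)

    def build_institutions(self, track, money):
        # Direct payment of money raises tolerance: the world adapts
        # (assumed here as one tolerance step per unit of money).
        self.tolerance[track] += money

    def disasters(self):
        # The world works only while risk stays below tolerance.
        return [t for t in self.risk if self.risk[t] >= self.tolerance[t]]

Using the example card values quoted above:

w = World()
w.develop("Nano", +1)   # Nano-manipulators
w.develop("Nano", +3)   # Desktop Fabrication
w.develop("Nano", +10)  # Free Replicators
w.develop("Nano", -10)  # Nanoimmune Systems
print(w.disasters())    # [] - risk 4 stays below tolerance 5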
> >I would like to add a fourth kind of risk: Social risk. While
> >technologies may cause other risks, bans make society more
> >dangerous. If the Social risk goes too high, we end up with a social
> >disaster. The risks can easily be represented by markers moved along
> >a line.
>
> Problem: what is a disaster for some may be heaven for others. The
> world will not end if, say, America becomes a police state under martial
> law, even if it would suck. The world would recover from such a state
> within (current) human lifespans.
The biotech gray goo scenario of Greg Bear's _Blood Music_ seems rather
nice from my point of view - does this mean that we should regard the
Bio/Nano disaster in the game as similarly relative? I think it is
important not to isolate the game from social reality. If it is
intended to convey a transhumanist point of view, it had better
demonstrate that technology alone isn't enough: we must make sure our
culture isn't turned into something nasty as we advance.
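For what it's worth, the Social risk track slots straight into the
tolerance sketch above. Again a hypothetical illustration; the ban()
helper and its numbers are my own assumptions:

w = World()
w.risk["Social"] = 0
w.tolerance["Social"] = 5

def ban(world, track, strength):
    # A ban lowers the banned track's risk marker but moves the
    # Social marker up by the same amount, so suppressing technology
    # is never free; enough bans push society past its tolerance
    # and trigger a social disaster.
    world.risk[track] = max(0, world.risk[track] - strength)
    world.risk["Social"] += strength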
--
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y