From: Adrian Tymes (wingcat@pacbell.net)
Date: Thu Mar 07 2002 - 19:47:00 MST
Anders Sandberg wrote:
> On Tue, Mar 05, 2002 at 06:33:53PM -0800, Adrian Tymes wrote:
>>We can only pump so many memes into a single simple game. Perhaps
>>another one can speak to the realities of tech deployment.
>
> A good idea. Maybe I could try out writing my own version, if you don't
> mind? My version would likely deal less with the race towards
> singularity and more with the politics and economics of technology.
Please, be my guest. (Not that I have any standing to grant permission
on this anyway, but if you want my permission here, you've got it.)
>>>Having a competition element is good for motivating people; the trick is
>>>to make the game a complex mixture of cooperation and competition.
>>>
>>Hmm. Perhaps if we pump up the random element? That is, when it is
>>the world's turn to play, play as many cards as there are players. You
>>can pass on your turn if you want, shuffling your hand into the deck and
>>drawing a new hand...but the world will continue apace. You can either
>>enjoy the ride, or try to steer it towards your goal. For any given
>>technology or event, you may be given the opportunity to put it into
>>play...but if you don't, someone else eventually will.
>
> This is a good idea. Even if you are a die-hard luddite, you had better
> participate rather than just sit back. Depending on the size of the
> deck, certain cards will likely not reappear very often, so getting rid
> of them is semi-permanent.
Playing them is semi-permanent. Getting rid of them...you can shuffle
them back into the deck, but the only "discarded" cards are Unbans and Bans
once they cancel each other out. Reason: for any given tech, you may be
given the opportunity to implement it...but if you pass, someone else
eventually will. Your choice is now vs. later.
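In rough Python (a sketch of the mechanics as I imagine them; the names,
hand size, and Card fields below are placeholders of my own invention,
not final rules):

from collections import namedtuple
import random

Card = namedtuple("Card", "kind target")  # e.g. Card("Ban", "nanotech")

def world_turn(deck, num_players, in_play):
    # On the world's turn, the deck plays as many cards as there
    # are players, whether anyone wants it to or not.
    for _ in range(min(num_players, len(deck))):
        in_play.append(deck.pop())

def pass_turn(deck, hand, hand_size=5):
    # Passing shuffles your whole hand back into the deck and draws
    # a fresh one; the techs you declined to play will eventually
    # resurface for someone else. The choice is now vs. later.
    deck.extend(hand)
    random.shuffle(deck)
    return [deck.pop() for _ in range(min(hand_size, len(deck)))]

def play_unban(unban, in_play, discard_pile):
    # Bans and Unbans are the only cards that ever leave the game:
    # an Unban cancels a matching Ban in play, and the pair goes to
    # the discard pile instead of back into the deck.
    for card in in_play:
        if card.kind == "Ban" and card.target == unban.target:
            in_play.remove(card)
            discard_pile.extend([card, unban])
            return True
    return False  # no matching Ban in play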
> Hmm, maybe one could even play this game luddite-wise: you try to ban
> all technologies so that singularity *and* disaster are impossible. For
> a luddite win to happen, all technologies enabling new stuff have to be
> banned, and these bans have to be upheld for a certain time. Sounds like
> a real challenge.
Umm...actually, I'm deliberately limiting the number of Bans and Unbans
to make this infeasible. Sit back, only banning stuff...you can do that
for a while, but there's only so much of that kind of political capital
floating around. If that's all you do, then eventually, you'll run out
and the world will start progressing without you.
Though a related possibility: try to invoke one of the three types of
Catastrophes. Three players only, of course.
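In sketch form (the starting supply of 3 is a number I just made up),
the constraint is nothing fancier than a finite pool per player:

class Player:
    def __init__(self, bans_left=3):  # invented starting supply
        self.bans_left = bans_left

    def play_ban(self, tech, banned):
        # Bans draw down a finite pool of political capital; once
        # it runs dry, the world progresses without you.
        if self.bans_left == 0:
            return False  # out of political capital
        self.bans_left -= 1
        banned.add(tech)
        return True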
> But this seems to assume that "the solution to tech problems is more
> tech", and that the true goal should be some kind of balanced form of
> technology. Personally I don't see why you can't have a singularity
> based only on biotech (maybe something like Blood Music or the Edenists
> of Peter F. Hamilton), AI or nanotech. This might be more of a game
> solution, of course - it is rather neat to have this kind of gliding
> threshold.
Problems:
* Biotech only - ok, we've upgraded our bodies, but we're still limited
to that which we can produce biologically, or mine and refine by
relatively crude industrial processes. Our minds are not any more
advanced; neither do we have companions or aides much smarter than
ourselves to call upon. Life remains mostly predictable, though
much longer.
* Nanotech only - without AI to control the nano, only crude processing
is possible. (Diamondoid space elevators? Sure. Nanites to repair
cellular damage? Maybe...and a single cell at a time, as controlled
by a person. Anything more complex than that? Nope.) And we,
ourselves, remain mostly unchanged from our current forms. Life
remains mostly predictable, though we do have more neat toys.
* Robo/AI only - our bright children, the AIs, may theorize and
philosophize all they want...but without nanotech or biotech to
synthesize things, manufacturing costs mean their ideas for changing
the world take years to implement, just like Big Ideas do today, thus
limiting the pace of significant change. In addition, previously
existing humans cannot join the advanced intelligences, for we do not
know how to merge them and us, or how to make either side become the
other. Life for humans remains almost totally unchanged; even any
given AI does not usually experience radical change over the course of
a few days.
The core concept is the purity of each type, to the neglect of the
others. The solution to most problems is, in part, to find, develop,
and apply a remedy...which, in this context, translates to more tech to
correct imbalances in previously deployed tech.
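If that principle were encoded as the win check, it might look like the
sketch below (the threshold and track names are invented for
illustration; the one load-bearing idea is that no pure track suffices):

def singularity_reached(points, threshold=10):
    # Hypothetical rule: the world reaches Singularity only once
    # bio, nano, AND robo/AI points all clear the threshold, since
    # each track fills gaps the other two leave open.
    return all(points[track] >= threshold
               for track in ("bio", "nano", "robo"))

# e.g. singularity_reached({"bio": 12, "nano": 11, "robo": 4}) is
# False: a near-pure bio/nano world still putters along.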
I'm also thinking of perhaps more Event cards like High Tech Terrorist,
embodying other ways that the world could destroy itself...unless
society has deployed solutions to block that avenue first. True, few people
would want to play such a card deliberately...but the deck itself cares
not for which cards come from it when it is the world's turn.
>>>Perhaps a better way of handling it would have a "tolerance" level for
>>>the different points. It represents how much the world can handle, how
>>>well institutions can deal with issues, and how much people have adapted
>>>to it. The points must remain below the tolerance for the world to work;
>>>tolerance is increased using either safety technologies or perhaps
>>>direct payment of money (representing the building of institutions). To
>>>reach singularity the world needs to adapt. This seems to be closer to
>>>the memetic goal of the game and transhumanism.
>>>
>>So why not just do pure tolerance (at least, as pure as you can get)
>>*and* suppress all new technologies? This seems to be just the same
>>abstract numbers you were objecting to.
>
> Sure, but given the assumptions about costs I had made, every player
> would eventually run out of money. Then they would not be able to
> prevent new tech from emerging.
As implemented by a limited number of Ban and Unban cards.
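For contrast, Anders' tolerance variant might be sketched like so (the
cost and dictionary layout are placeholders; the key property is the
finite bankroll):

def world_survives(points, tolerance):
    # The world holds together only while every track's points stay
    # below what its institutions can tolerate.
    return all(points[t] < tolerance[t] for t in points)

def buy_tolerance(money, tolerance, track, cost=3):
    # Tolerance rises through direct payment (building institutions)
    # or through safety technologies; since money is finite, nobody
    # can stall new tech forever.
    if money < cost:
        return money  # broke -- new tech now emerges unopposed
    tolerance[track] += 1
    return money - cost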
> Sure. But how is this different from the social risk described by Orwell
> as "If you want a picture of the future, imagine a boot stamping on a
> human face--for ever"? With ubiquitous law enforcement, paranoid
> culture and AI enforcement you could get it even if all AI is obedient,
> all biotech under control and the nanotech under lock and key. It might
> even be self-reinforcing and impossible to get rid of. It is the social
> version of gray goo, a permanently entrenched society that does not
> promote human growth.
What you describe is not so different from the robo Catastrophe: a
mindless machine wiping out free humanity; the machine just happens to
be composed of organic robots instead of metallic/plastic/ceramic ones.
Given that AI would be required to effectively implement such a scheme
worldwide, I'd say the robo Catastrophe more or less covers it...though
perhaps I should note that explicitly.
> What I worry about in your system of bans is that it suggests that
> banning technologies is a good thing and that it does not carry any
> cost. If antibiotics or the Internet are banned, in the real world this
> would cause hundreds of thousands of deaths and billions in economic
> losses. In the game it would remove a few Bio or Robot points.
See above about game-ending Event cards. Banned technologies would not
exist so far as said cards are concerned.
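Resolution for such a card might run like the following sketch (the
attributes are invented; the one rule that matters is that banned tech
is invisible to the Event):

from collections import namedtuple

Tech = namedtuple("Tech", "name track points")
Event = namedtuple("Event", "track threshold")

def catastrophe_fires(event, techs_in_play, banned):
    # A game-ending Event counts only technologies that are in play
    # and not banned; a banned tech does not exist so far as the
    # card is concerned.
    total = sum(t.points for t in techs_in_play
                if t.track == event.track and t.name not in banned)
    return total >= event.threshold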
>>>I think it is
>>>important not to isolate the game from social reality. If it is intended
>>>to convey a transhumanist point of view, it had better demonstrate that
>>>technology alone isn't enough; we had better make sure our culture isn't
>>>turned into something nasty as we advance.
>>>
>>Again, define "something nasty". For any given permutation, some of the
>>audience will be predisposed to think it's actually a *good* thing...so
>>better to just avoid that topic entirely, no?
>
> It is your game, and you may do with it as you like, but I think leaving
> out issues like this would make it less interesting and actually less
> likely to spread the positive memes you would like to spread. We already
> have enough games and scenarios around where technological development
> is pursued for its own sake, and far too few that dare to look at how
> society interacts with technology.
The base goal is a mere introduction to the concept of Singularity, and
a mild endorsement of it as a good target. Again, if you wish to develop a
game that looks at the social realities of deploying certain specific
technologies, go ahead. Such a game might be viewed as what happens
when, in the SCG's abstraction, someone just plays a certain Technology
card, or maybe a few cards.
> This is one of the most obvious weaknesses of transhumanism today, and
> many of our critics latch on to it: we do not integrate our
> technological visions with social visions, which suggests either that
> we do not care in the least about the rest of humanity,
Some of us do, despite the efforts of those humans who see our visions
as the biggest threat to life and limb currently in existence, and act
accordingly. It's hard to care for those who are not only trying to
kill you*, but also to utterly discredit everything you believe in...but
some of us manage it.
* Not exaggerating here, BTW. I have received death threats over my
beliefs in the past. That I no longer do, I attribute mostly to the fact
that I have learned not to associate with people who would do such a
thing.
> that we naively
> think technology is the only thing that matters
Depends on how broadly or narrowly one defines "technology". Is it
technology when one discovers a better way to live, if that way just
happens to require certain modern inventions in order to be practical?
> or that we have unsavory
> social visions we do not reveal.
See above. I envision life radically enhanced by new technologies, with
material prices dropped through the floor compared to current levels,
where anyone can modify their own bodies as they please, and with the
ability for anyone to be beyond the reach of the law if they so choose
(so long as they do not harm anyone under the law's protection). This
would classify me as a dangerous lunatic in any social circle that
believes in keeping control over everyone, so prudence demands that I
not reveal it in most cases.