"den Otter" <neosapient@geocities.com> writes:
> ----------
We might not allow selling nuclear weapons to Saddam, but free spread
of medical technology is encouraged.
> > From: Dan Clemmensen <Dan@Clemmensen.ShireNet.com>
>
> > I've read their entire "Civilization", but not "Lessons of History."
> > I think that the conclusion overlooks the likelihood of altruism
> > among the potential immortals. If even one of the immortals has
> > even the slightest amount of altruism, then the technology will
> > be disseminated to the populace as a whole.
>
> Assuming that the other immortals allow it (there could be a codex
> against proliferation of transhuman tech, much like the current one
> against the proliferation of nuclear weapons).

We might not allow selling nuclear weapons to Saddam, but the free
spread of medical technology is encouraged.

> With (near-)perfect surveillance being easy for posthumans, any "illegal"
> development on earth could easily be nipped in the bud.
I wonder about perfect surveillance; I was arguing a bit with Nick
Boström about it after TransVision, and I doubt you could really
develop a perfect surveillance system when the enemy might be
posthuman too. To make things even worse, a defecting posthuman might
create scratch-monkey posthumans that would do the defection "of
their own free will", with no trace leading back to the originator.
> > The argument against
> > this assumes a zero-sum game in which a gain by the unwashed masses
> > equates to a loss by an immortal. Information doesn't work this
> > way.
>
> Information is power, and by allowing millions of people to become
> god-like too, you multiply the risk of something going wrong by
> (approximately) the same amount. To the already fully autonomous
> posthumans this might not seem like a very good idea; there's more
> to lose than to gain.

I disagree; I would say you gain much more. An industrialized third
world would increase utility production worldwide tremendously, both
as a direct producer and as a trading partner to everybody else.
Compare it to the Marshall Plan: everybody was better off afterwards
despite the increased destructive abilities of Europe (remember that
de Gaulle proudly boasted that his missiles could be turned in any
direction, including toward his former benefactors).
For your scenario to hold, the risk each posthuman poses to every
other posthuman would have to be bigger than the utility each
posthuman offers every other posthuman. But network economics
suggests that utility increases with the number of participants,
which means the utilities would eventually outgrow the risks as the
number of posthumans grows.
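To make the arithmetic behind that last paragraph explicit, here is a
minimal sketch in Python. It is a toy model, not anything from the
thread: it assumes den Otter's linear risk scaling (each posthuman
adds a roughly constant amount of risk) and a Metcalfe-style utility
model (utility grows with the number of pairwise links), and the
constants RISK_PER_POSTHUMAN and UTILITY_PER_LINK are made-up
illustrative numbers.

    # Toy model of the linear-risk vs. network-utility argument above.
    # Both scaling laws are assumptions; neither constant is a
    # measured quantity.

    RISK_PER_POSTHUMAN = 10.0  # hypothetical risk units added per posthuman
    UTILITY_PER_LINK = 0.5     # hypothetical utility units per pairwise link

    def total_risk(n: int) -> float:
        # Assumption 1 (linear risk): every new posthuman adds a
        # roughly constant chance of "something going wrong".
        return RISK_PER_POSTHUMAN * n

    def total_utility(n: int) -> float:
        # Assumption 2 (Metcalfe-style): utility scales with the number
        # of possible pairwise relationships, n * (n - 1) / 2.
        return UTILITY_PER_LINK * n * (n - 1) / 2

    # Utility overtakes risk once UTILITY_PER_LINK * (n - 1) / 2 exceeds
    # RISK_PER_POSTHUMAN, i.e. for n > 2 * RISK_PER_POSTHUMAN / UTILITY_PER_LINK + 1
    # (with these constants, n > 41).
    for n in (2, 10, 41, 42, 100):
        winner = "utility > risk" if total_utility(n) > total_risk(n) else "risk >= utility"
        print(n, total_risk(n), total_utility(n), winner)

Whatever values the two constants take, the quadratic utility term
eventually dominates the linear risk term; changing them only moves
the crossover point, which is the point of the argument.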
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y