From: Anders Sandberg (asa@nada.kth.se)
Date: Wed Jun 16 1999 - 10:19:58 MDT
"den Otter" <neosapient@geocities.com> writes:
> > From: Anders Sandberg <asa@nada.kth.se>
> > > Assuming that the other immortals allow it (there could be a codex
> > > against proliferation of transhuman tech, much like the current one
> > > against the proliferation of nuclear weapons).
> >
> > We might not allow selling nuclear weapons to Saddam, but free spread
> > of medical technology is encouraged.
>
> Yes, but that's because medical technology is relatively harmless.
> If you allow powerful transhuman tech such as intelligence
> enhancement techniques and mature nanotech to proliferate you can
> hardly control what it will be used for, and you can bet your life
> that it will be used for "evil" sooner or later. Rather sooner than
> later, btw.
Is medical technology harmless? It can be used to create biological
weapons, and there was at least one media hoax claiming that Saddam
had looked into cloning. Any technology has dangerous possibilities,
but some are more obvious than others.
> > And of course, the scientific
> > community is displaying its discoveries for all to see.
>
> But posthumans could much more easily keep their advances to themselves.
Would they? I can very well imagine posthuman idealists continuing to
send articles to Nature. As Tipler pointed out, in any sufficiently
advanced and pluralistic society there are bound to be some people
who want to interact with the primitives for whatever reason.
> > I disagree, I would say you gain much more. An industrialized third
> > world would increase utility production worldwide tremendously - both
> > as direct producer and as trading partners to everybody else. Compare
> > to the Marshall plan: everybody was better off afterwards despite the
> > increased destructive abilities of Europe (remember that de Gaulle
> > proudly boasted that his missiles could be turned in any direction,
> > including his former benefactors).
>
> Well, there's one crucial difference between humans and
> posthumans: the former *must* cooperate in order to survive
> and to get ahead, while the latter are fully autonomous,
> self-contained systems.
Look out for the dangerous little word "are". You assume a lot with
it. Maybe it is feasible for posthumans to be autonomous and
self-contained, but it might not be economical. If posthumans 1 and 2
specialize, they can become more efficient at producing
energy/matter/nanotech/fnord than either of them individually, and
hence be better off trading instead of being fully autonomous. It
seems you would need to get rid of all gains from specialization and
scale for autonomous posthumans to be economical.
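
To make the economics concrete, here is a toy sketch in Python -
every productivity number in it is invented for illustration, not a
claim about actual posthuman capabilities:

# Toy gains-from-specialization sketch - all numbers hypothetical.
# Two agents each split one unit of time between producing "energy"
# and "nanotech". Because their productivities differ, specialization
# plus trade yields strictly more of both goods than full autonomy.

productivity = {
    "A": {"energy": 10.0, "nanotech": 2.0},  # A is better at energy
    "B": {"energy": 4.0, "nanotech": 8.0},   # B is better at nanotech
}

def autarky_totals():
    """Each agent splits its time 50/50 and keeps its own output."""
    return {
        good: sum(0.5 * productivity[a][good] for a in productivity)
        for good in ("energy", "nanotech")
    }

def specialized_totals():
    """A spends all its time on energy, B on nanotech, and they trade."""
    return {
        "energy": productivity["A"]["energy"],
        "nanotech": productivity["B"]["nanotech"],
    }

print("autarky:    ", autarky_totals())      # {'energy': 7.0, 'nanotech': 5.0}
print("specialized:", specialized_totals())  # {'energy': 10.0, 'nanotech': 8.0}

Even though each agent could produce everything for itself, the pair
ends up with more of both goods by specializing and trading, which is
the standard gains-from-trade result.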
> > For your scenario to hold, the risk posed by each posthuman to each
> > other posthuman must be bigger than the utility of each posthuman to
> > each other posthuman.
>
> And that's exactly what would be the case; other entities are useful
> because of their added computing/physical power, but if you can
> add infinite amounts of "slave" modules to your brain/body, why bother
> with unpredictable, potentially dangerous "peers"?
Can you add infinite amounts of slave modules? That assumes a lot
about your cognitive architecture - you seem to have a very definite
view of what posthumans will be like, which you base your reasoning
on, but which remains an assumption until shown otherwise. As for
infinite expandability, that is not obviously possible; can you give
any support for it?
I would also like to point out that one of the most useful
resources/abilities in a highly advanced information society is
alternate viewpoints. Problem solving has been shown to work better
when the cooperating agents involved have different approaches than
when they are variations of the same basic agent. Diversity is a
competitive advantage, which means that a diverse group of posthumans
can often outcompete an equally large group of clones of a single
posthuman.
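
As a toy illustration (the search landscape and the agents'
"viewpoints" are invented for the example, and hill climbing stands
in for problem solving generally):

import math

# Toy illustration of diversity in problem solving. Five clones all
# share the same starting "viewpoint" and get stuck on the same local
# peak; five diverse agents cover more of the landscape, so the best
# of them does better. The landscape itself is arbitrary.

def landscape(x):
    return math.sin(3.0 * x) + math.sin(5.0 * x + 1.0)

def hill_climb(x, step=0.01, iters=5000):
    """Greedy local search; returns the best point found and its value."""
    for _ in range(iters):
        best = max([x - step, x, x + step], key=landscape)
        if best == x:
            break  # local peak reached
        x = best
    return x, landscape(x)

def best_of(group):
    """Best result over a group of independent hill-climbers."""
    return max((hill_climb(x) for x in group), key=lambda r: r[1])

clones = [1.0] * 5                   # identical approaches
diverse = [1.0, 3.0, 5.0, 7.0, 9.0]  # different approaches

print("best clone:  ", best_of(clones))
print("best diverse:", best_of(diverse))

The clones all converge on the same mediocre local peak, while the
diverse group covers more basins of the landscape, so its best result
is at least as good and here strictly better.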
> Of course, it is
> unlikely that there would be just one supreme posthuman, so they'd
> have to compromise and declare a "pax posthumana" based on
> MAD, very much like the cold war I presume.
Note that here you assume that offense is efficient and deadly, which
is another assumption that may not hold. If I can hide in space, I
will be hard to find while the others near Earth will be visible - so
your scenario would force all posthumans into total hiding. And if it
is possible to protect oneself (for example by being distributed),
then MAD may not even hold.
> > But network economics seems to suggest that the
> > utilities increase with the number of participants, which means that
> > they would become bigger than the risks as the number of posthumans
> > grows.
>
> If a posthuman is so smart, and already (practically) immortal, surely
> it could develop all its utilities by itself in due time? Economies are
> typically a construct of highly limited creatures that must specialize
> and cooperate to survive.
Isn't a posthuman a highly limited creature? It is still subject to
the same physical laws and the same limitations of space, time,
entropy, energy and matter as humans are. It may just be better at
exploiting them. And as Marvin Minsky pointed out, as soon as there
is *anything* that is scarce, economics becomes relevant. Maybe you
could build everything for yourself given enough time, but isn't it
easier to trade with someone who already has it?
As I explained above, trading posthumans are likely to be noticeably
more efficient at exploiting resources than isolated posthumans. They
also have less reason to attack each other, making it possible to
focus defenses outwards rather than inwards, increasing their (mutual)
strength. Also, the emergence of posthumans is not likely to be a
binary jump (as I have argued at length here and on the transhumanist
list), which means there will be a spectrum of more or less trans- or
posthuman beings around who have incentives to participate in such
trade.
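
To put toy numbers on the network-economics point quoted above - the
constants are pure invention, only the scaling matters:

# Toy model of the network-economics argument - constants made up.
# Total trade utility grows with the number of possible pairings,
# roughly u * n * (n - 1) / 2 (Metcalfe-style), while total risk is
# modeled as growing only linearly, r * n. Past some group size the
# utility term dominates regardless of the constants chosen.

u = 1.0   # hypothetical utility per trading link
r = 10.0  # hypothetical risk each extra posthuman poses

def net_benefit(n):
    """Net benefit of an n-member trading community under this model."""
    return u * n * (n - 1) / 2.0 - r * n

for n in (2, 5, 10, 21, 22, 50):
    print(f"n = {n:3d}  net benefit = {net_benefit(n):8.1f}")

# With u = 1 and r = 10 the crossover is at n > 2 * r / u + 1 = 21:
# below that size the risks win, above it the trade network does.

The exact crossover depends entirely on the made-up constants; the
point is only that a superlinear utility term eventually outgrows a
linear risk term as the number of participants increases.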
-----------------------------------------------------------------------
Anders Sandberg Towards Ascension!
asa@nada.kth.se http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y