Re: Singularity or Holocaust?

From: Anders Sandberg (asa@nada.kth.se)
Date: Thu Feb 26 1998 - 12:06:42 MST


Paul Hughes <planetp@aci.net> writes:

>I have pondered with great interest some of the recent postings on the
>singularity and post-humanism. One of the passages that caught my eye
>was by den Otter posted to the list on Feb 21. It reads as follows:
>
>>'Unless transhumanists somehow get organized, the chances that any of
>>us will make it past the singularity are close to zero; the powers
>>that be will crush us like worms. Only (some of) the rich and
>>powerful are going to make it.'
>
>I have to say I can't argue with this - this seems like a *very* real
>scenario. Does anybody see it differently? And if so, how and why
>have you come to a different set of conclusions?

There are two questions here that need to be answered: what do we
mean by the singularity, and what social dynamics will we see around
it? Unless we answer the first, the rest of the discussion will be
just a loud waste of bits. All too often the singularity becomes just
another name for the Rapture, some kind of techno-eschaton that is
left conveniently undescribed and magical.

What we know is that a lot of growth curves are nearly exponential
(interestingly, no longer the population curve, if the new figures are
to be believed) and that it seems possible they can get even
steeper (vide Hans Moravec's discussion of Moore's "law"). Will they
all become asymptotically vertical in finite time (the Vinge
singularity), have an inflexion point and then settle towards a very
high but finite level (the sigmoid singularity), or does the
singularity just mean that no predictions about the future are
possible (the horizon singularity)?
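
To make the three alternatives concrete, here is a minimal Matlab
sketch (constants invented for illustration), using the logistic
equation as a stand-in for the sigmoid case and y' = k*y^2 for
finite-time blowup:

t=0:0.01:0.9;
k=1; Ymax=10;
ye=exp(k*t);                     % exponential: y' = k*y
ys=Ymax./(1+(Ymax-1)*exp(-k*t)); % sigmoid (logistic): y' = k*y*(1-y/Ymax)
yh=1./(1-k*t);                   % hyperbolic: y' = k*y^2, vertical at t=1/k
plot(t,[ye; ys; yh]),

Only the hyperbolic curve actually becomes vertical in finite time;
the other two merely get steep.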

If we speak about the horizon singularity, this discussion is fairly
pointless; den Otter's exhortation is still a good idea for a number
of reasons, but we cannot discuss what we need to do to survive the
singularity since it is by definition unknowable. Boring.

If we speak about the other two singularities, it becomes clear that
change is the keyword here. Can we, and the things we enjoy, survive
ever-steepening change? That is a very good question.

One of my favorite devices is "exponential change makes differences
grow exponentially". If I have good intelligence amplification, then I
can use it to earn enough to buy or develop better IA equipment, and
so on. My neighbor, who isn't quite as bright, will still improve,
but will fall further and further behind while I soar towards
transcension. The same can be said about whole economies: many poor
nations are actually doing reasonably well, but compared to us they
appear almost static. And guess who will be able to afford the
scientists, engineers, businessmen and other people who make growth
even faster?
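
A toy calculation (growth rates invented for illustration): give me a
growth rate just ten percent higher than my neighbor's and the
absolute gap between us itself grows exponentially, even though he
never stops improving.

t=0.1:0.1:10;
me=exp(1.1*t);            % my level, boosted by the IA feedback loop
neighbor=exp(1.0*t);      % my neighbor's level, still growing
plot(t,log(me-neighbor)), % log of the gap: asymptotically a straight line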

There are equalizing forces too. Many of the results from expensive
and time-consuming research appear openly in journals and on the net,
making it possible for others to leapfrog. Technology spreads, and
rich groups can and do give some of their surplus to others
(voluntarily or not). The problem, of course, is: are these forces
strong enough to keep the differences bounded? I doubt it.

I will try to model this as N coupled differential equations, where
the solutions Y_n(t) grow exponentially but are linked by diffusion
terms. Something like

Y'_n = k*Y_n + l*(sum_i Y_i - N*m*Y_n) = (k - l*m*N)*Y_n + l*sum_i Y_i

If l and m are large enough (specifically, if l*m*N > k), the Y_n
will approach each other and we get smaller differences over
time. This corresponds to strong diffusion where sharing costs you m
(it is essentially growth with redistribution). If the cost m is zero
(as when broadcasting results on the net), the coupling only adds a
shared term to everybody's growth: absolute differences still grow
like exp(k*t), but relative to the overall level, which grows like
exp((k+l*N)*t), they shrink. In fact, if you look at the derivatives
of the differences you get

(Y_i - Y_j)' = (k - l*m*N)*(Y_i - Y_j)

which shows that if m is low or zero, the absolute differences will
grow regardless of how much help the advantaged give the
disadvantaged; just helping others won't bridge the gap, you need to
slow down yourself to do that (and that might put you at a
disadvantage). Sharing is also more likely when m is small than when m
is big. The N term is, however, a bit hopeful: the more players there
are, the more differences will be averaged out and the more the
overall support from others will dominate.
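
For m = 0 the system can in fact be solved exactly: the sum S obeys
S' = (k + l*N)*S, which gives

Y_n(t) = (Y_n(0) - S(0)/N)*exp(k*t) + (S(0)/N)*exp((k+l*N)*t)

A small check (same constants as the appended script) that absolute
spreads grow while relative spreads shrink:

n=10; k=1; l=0.06;
y0=abs(randn(1,n));
s0=sum(y0);
for t=[0 1 2],
  y=(y0-s0/n)*exp(k*t)+(s0/n)*exp((k+l*n)*t); % exact solution for m=0
  [max(y)-min(y), (max(y)-min(y))/mean(y)]    % absolute and relative spread
end,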

m seems to be decreasing rather than increasing in an information
society like ours, k is already fairly high and l has obviously not
been high enough in the past to lessen differences. N might be
increasing, though, as new players enter the global arena.

(I actually wrote a small Matlab script to play with this while
writing this article; I have appended it at the end. Nothing special,
but useful for thinking graphically.)

So it seems likely that differences will grow rather than shrink as
the singularity, or any other period of fast, self-supporting change,
is approached. Now, given the human tendency toward envy and fear of
anybody who gets too powerful, this suggests that tensions will
increase long before the truly transhuman stage (no need to invoke
the Powers). Even if most of the have-nots profit from the haves,
there will likely be some who resent them for various reasons.

How these tensions are released is a tricky question which depends on
the relative power of the haves and have-nots. If the have-nots
cannot seriously hurt the haves, then likely nothing will happen
except that the haves grow ever stronger (faster than the have-nots,
who will still grow) amid plenty of mutual ill feeling. It is not a
given that the haves would want to stamp out the have-nots if m is
low or zero - as long as sum(Y_i) is larger than N*m*Y_n they profit
from the occasional useful idea from the other side. Once N*m*Y_n
becomes larger than the sum, the situation turns unprofitable, but
then it is easier simply to break contact (aha, the economic reason
for transcending?) (cost: 0) than to remove the have-nots (cost:
finite).
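
A toy check of this break-even point (all numbers invented, and m > 0
here so that sharing actually costs something): hold nine players at
level one and let the leader's level run ahead.

l=0.06; m=0.5; n=10;
rest=9;                      % nine players stuck at level 1
lead=1:0.5:30;               % the leader's level runs ahead
gain=l*(rest+lead-m*n*lead); % value of contact for the leader
plot(lead,gain),             % positive until m*n*Y_lead overtakes the sum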

So this scenario has one part of humanity transcending, leaving the
rest (by now fairly advanced) in the dust. The remaining part can
still transcend later, and we might even get a multipart singularity.
The long-range fate of everybody depends a lot on tricky questions
about resources: are the have-nots economical as feedstock? Do the
Powers need to struggle for resources in the solar system? It should
be noted that if the sigmoid theory of technological growth holds,
then the Powers will reach a point of diminishing returns where k and
l begin to decrease (say like k0/(1+Y_n)) and the differences will
eventually start to shrink again as everybody approaches the same
technological ceiling. At this point ordinary economics likely takes
over again: there is no real point in struggling with each other when
it is more profitable to work together at solving the remaining
problems.
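
A sketch of this diminishing-returns variant (constants invented),
replacing k and l in the appended script with k0./(1+y) and
l0./(1+y): growth flattens from exponential to roughly linear and the
curves bunch together again.

n=10; k0=1; l0=0.06; m=0; h=0.1;
y=abs(randn(1,n));
mm=[];
for i=1:400,
  k=k0./(1+y); l=l0./(1+y);       % returns diminish as levels rise
  y=y+(k.*y+l.*(sum(y)-m*n*y))*h; % Euler step
  mm=[mm; y];
end,
plot(mm),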

The other scenario is that the have-nots can hurt the haves, say
through terrorism. In this case it will be in the interest of the
haves to either grow so fast that they become invulnerable (this
assumes they can figure out beforehand that "at tech level X nothing
the others can do could hurt us", a doubtful assumption, especially
since the others are learning from the haves), decrease the
likelihood of terrorism, or neutralize the have-nots. The problem
here is that the more powerful the technologies that appear in the
world, the easier it seems to create a terror weapon (the shields are
not keeping pace with the spears - it might be interesting to ask
under what circumstances this could change). Since there is a nonzero
chance that somebody somewhere might use a terror weapon, the
situation becomes less and less stable as technology advances (unless
a Cold War standoff can emerge, which requires *strict* control of
the relevant technologies by *few* groups). So even if the haves are
spending resources on goodwill, there is always a chance that
somewhere in a remote corner a lunatic could be building a weapon of
mass destruction using fairly old tech (fertilizer, anybody?).
Goodwill just lowers the risk, it does not remove it (this is true
regardless of the differences between haves and have-nots; it seems
to be a general problem for sufficiently advanced and violent
civilizations).

So the obvious solution would be to neutralize the have-nots.
Traditionally this has been done by extermination, but that might not
be ideal (the have-but-not-so-muches just below the haves might get
paranoid, and since both sides understand this they might not want to
start a trend that could spell doom for both of them), so other
possibilities could be used: deliberate steering away from dangerous
technologies and ideas, gnatbot police or even subtle brainwashing.
This comes at a noticeable cost, including the loss of useful help
from the have-nots (remember that they remain useful up to very high
differences) and the risk of being discovered and attacked (once you
have become paranoid, you cannot stop - it is a memetic cancer).

So a lot will hinge on the exact distribution of
power/knowhow/extropy/whatever between the players. At present the
distribution has a noticeable tail in the positive direction (my
guess is that it is some form of x*exp(-x) curve). As this
distribution evolves, it will become broader, carried along by the
dynamics discussed above. If we make the oversimplification that
every level grows exponentially at the same rate k, then a player at
level x at time 0 sits at x*exp(k*t) at time t, so the density f(x)
is stretched into exp(-k*t)*f(x*exp(-k*t)). Even a fairly narrow
distribution will become broader and develop a longer tail.
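
As a footnote, x*exp(-x) is the Gamma(2,1) density, so the stretching
is easy to simulate (sample size and rate invented):

x=sum(-log(rand(2,10000)));      % 10000 samples with density x*exp(-x)
k=1; t=2;
xt=x*exp(k*t);                   % everybody's level after time t
[max(x)-min(x), max(xt)-min(xt)] % absolute spread stretched by exp(k*t)
hist(xt,50),                     % broader, with a longer tail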

If there exists a continuum of players between the foremost haves and
the last have-nots, then the strains will be more distributed and
every player would have to deal with greater, lesser and equal
players. This might be stabilising, since no single player could
easily control the whole system - there are other players of at least
equal power, and a significant number of less powerful players that
could ally against it. Even the most powerful player would have to
deal with the total mass below. Only if one group could separate
itself significantly from the others would it be possible for it to
neutralize the others, and any such separation moves would naturally
be keenly watched by the others. This suggests the possibility of
secretly planned jumps ahead as a powerful destabilizing force - if
they are possible, then a player might become dominant before anybody
can react. Whether these jumps really are possible is determined both
by technology and physical law (e.g., could a dominant technology
such as workable brainwashing nanites be developed? Could espionage
become so good that it forces openness?) and by policy (do the
leading players demand openness from each other?). It seems that the
"weapon of openness" is quite crucial here. Since nobody wants to be
neutralized, and everybody has something to gain from cooperating
with the others, a mutual openness policy would be desirable for all
players, unless they have reason to believe such openness could be
subverted.

Will the haves remain cohesive? Their internal differences would have
exactly the same dynamics as for the others, and since they would be
growing the fastest, the strains would be largest. It is rather
likely this model does not work identically at the smaller scale of
such a group (e.g. the constants k, l and m differ within groups and
between groups). This could lead to cohesive groups diverging from
each other (with potential for inter-group conflicts), or to groups
losing cohesion. If the growth dynamics depend on the size of the
groups (for example, due to the need for specialization in a complex
field), then this loss of cohesion would also act as a limiting
factor on the growth of the haves, enabling the have-nots to catch up
slightly. On the other hand, as I have discussed above, there is no
reason much more advanced individuals could not work together with
less advanced individuals if both profit from it thanks to their
different specialisations. This requires further consideration (a
first rough model might be to decrease k and l in inverse proportion
to Y_1, which would make the model identical to the sigmoid model
mentioned above; the response is likely more nonlinear, however).

Conclusions:

It seems very likely that the differences between different groups of
humans will, at least on some scales, grow exponentially in the near
future. Traditional solutions for lessening differences
(redistribution) fail when sharing becomes cheap (as in an
information society). The appearance of a large number of interacting
players might make the growth period easier. The growing differences
are likely to cause strains, forcing the leading edge to either speed
ahead to a safe region, attempt to control the situation, or
neutralize the threat of the have-nots. Only the first alternative
appears to be completely stable; the others carry noticeable risks.
The leading edge will suffer the most intense internal differences,
and might be hampered in its growth for that reason. Openness seems
to be one of the best ways of handling the risks of "jumping ahead"
and gap formation, as well as allowing the have-nots to assure the
haves that they have no evil plans in store.

Well, this got a bit out of hand (I had only planned to write a
simple response), but I hope it shows why I remain optimistic about
the future while at the same time thinking it will become much too
interesting for us to like :-)

----------------

% Number of players
n=10;
% Timestep
h=0.1;

% Rate of advance
k=1;
% Technology diffusion
l=0.06;
% Cost of sharing
m=0;

% Random positive starting levels
y=abs(randn(1,n));
% Level history, one row per timestep
mm=[];

% Euler integration of Y'_n = k*Y_n + l*(sum(Y) - m*n*Y_n)
for i=1:10,
  ys=sum(y);
  yp=k*y+l*(ys-m*n*y);
  y=y+yp*h;
  mm=[mm; y];
end,
% Plot the log levels over time
plot(log(mm)),

-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y

