Darwinian Extropy

From: Tim_Robbins@aacte.nche.edu
Date: Sun Sep 08 1996 - 14:55:22 MDT


Max More writes in response:

<You might have no choice. You seem to assume >H level
intelligences to be actively benign, leaving you a sufficient part of
resources. I suspect the scenario to be Darwinian/ALifish, where first
the easily accessible (Belt, Kuiper, Oort, atmosphereless satellites)
resources get rapidly depleted>

I don't assume that >H level intelligences will be benign by chance. I
assume we'll design their initial matrix behavior that way. Your
assumption of the resource insatiability of such future intelligences
seems possible but not probable, unless you assume that this insatiability
ceases with the consumption of the home system. Otherwise, isn't it
logical to assume that such insatiability would have already converted all
star systems and interstellar resources? Unless, of course, we assume
we're the first to come up with the concept. It is possible we are first.
Somebody has to be.

Another point--all higher intelligences will have to have purpose, not
just program; otherwise you're liable to get a very high "suicide" rate
as the intelligence grows in successive design generations. That is, if
we are talking about self-aware intelligences. You may not be.

You also offer the slogan:
<Darwin days, even in the digital Eden.>

This I see repeatedly as the limit of your analysis: a belief that
Darwinian competition and natural selection have as their outcome the
triumph of the most EFFICIENT--especially in terms of energy and
chemical usage. Were this so, nature would never have "selected"
beyond the bacterium--which is still the winner today in the biological
fist-fight. But Darwinian evolution also involves the biological
niche--the jury is still out as to whether organisms just fill niches
or create them. And natural selection "within" a species is an even
more cumbersome quandary. It usually comes down to the biological
question of not "what is efficient?" but rather "what is sexy?" Often
arbitrary, and quite different for a peacock or a person. The evolution
of the intellectual and cultural idiosyncrasies of the human mind
certainly lies outside mathematical equations of resource and
reproductive efficiency. Most of what humanity does every day as a
species, and every day as an individual, has little to do with
efficiency. I would agree that it's competitive. But "within" our
species, and more often, just within our tribe. You don't see Masai
tribesmen trying desperately to become stockbrokers because it's the
more "competitive" life--a New York stockbroker with a fair income has
many more resources at his disposal and a much longer lifespan, and his
children will have it better as well. So why don't the Masai want that
too? What is the "purpose" of art? What makes a breeding partner sexy?

The point being that a hyperintelligent mind is likely to be less
subject to mathematical behavioral analysis than we are, not more. It's
likely to be even less "logical" in its reasons than the arbitrariness
of humanity. It's certain to be more complex. A "social" group of such
>H minds is likely to be unfathomable in its behavior--and perhaps very
inefficient and illogical to outside analysis indeed.

As an economist, I can say that people are not "rational actors," not
in groups and not as individuals. We act based on "human" motivations,
not on what is most efficient. Nor are the most efficient humans the
ones who always succeed. Were that true, geniuses would all be
multi-millionaires with hundreds of children.

Should we apply this efficiency analysis to corporations? Nations?

Somehow, it doesn't work out that way.

Max, do you really think that these >H entities will have the
behavioral motivations of bacteria--just coupled with the power to
disassemble any planet that looks tasty?

-TLR
