Re: SETSIs (was Re: seti@home WILL NOT WORK)

From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Sat Jul 10 1999 - 03:29:00 MDT


> "Michael S. Lorrey" <mike@lorrey.com> wrote:
>
> You are both wrongly assuming that all technological civilizations would
> have similar exponent curves in technological/population development.

I'm not making this assumption. Dyson argued 40 years ago that even
if we slowed ourselves down to a 1% annual growth rate, we would still
reach a power consumption level equal to the entire solar output
in only 3000 years! Another SETI researcher made the point that
it is difficult to imagine a society that could maintain a 0.000000...%
growth rate for thousands (or millions) of years (either you have an
accident, or you decay away, *or* you eventually evolve to your
environmental limits).
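
For anyone who wants to check Dyson's arithmetic, here is a quick
back-of-the-envelope sketch (the figures for current consumption and
solar output are my own rough assumptions, not Dyson's):

    import math

    current_use = 1.3e13    # watts -- rough 1999 world power consumption (assumed)
    solar_output = 3.8e26   # watts -- total solar luminosity (assumed)
    growth = 1.01           # 1% annual growth rate

    years = math.log(solar_output / current_use) / math.log(growth)
    print(round(years))     # roughly 3100 years

Even a "slow" 1% exponent closes a thirteen-orders-of-magnitude gap
in about three millennia.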

Growth is a fact of life. It is built into Nature. Population biologists
tell you that populations grow to the limits of the environment and then
crash when conditions change for the worse. I would argue that it is
highly difficult for Nature to evolve "internal" limits. The limits
are "imposed" by the environment. Look what happens in situations
involving the introduction of a non-indigenous species (say rabbits
in Australia) -- if the environment is suitable the species expands to the
allowable limits. In most other environments you don't see this
because everything is in balance among the predators, the prey,
the food resources, and the reproduction rate.
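
If you want to see the standard picture the population biologists are
describing, a minimal logistic-growth sketch (a textbook model, with
illustrative numbers of my own choosing) shows the population climbing
to whatever carrying capacity the environment imposes:

    def step(n, r, K):
        # one year of logistic growth toward carrying capacity K
        return n + r * n * (1.0 - n / K)

    n, r, K = 2.0, 0.3, 1000.0   # illustrative numbers only
    for year in range(60):
        n = step(n, r, K)
    print(round(n))              # settles at ~1000, the environmental limit

The limit lives entirely in K -- the environment -- not in the growth
term itself; drop K (conditions change for the worse) and the same
dynamics produce the crash.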

I would also argue that by definition "technological civilizations"
get on pretty exponential growth paths. Humans didn't have exponential
growth (in fact we were barely surviving as a species) *until* we
developed the technologies that allowed us to manipulate the
environment in ways more sophisticated than our genetic program allowed.

I believe you have to make a concrete case that a developing technological
species/civilization would consciously *choose* to terminate its growth.
That means that you have to negate the fundamental self-preservation
and/or reproductive instincts necessary for life. As I've discussed
in other threads -- if you want to be immortal, you have to eliminate
reproduction -- if you want to reproduce, you have to choose to die
(or prevent the development of technologies that enable personal
immortality). There *are* hard limits to growth. There may be a few
examples of Vulcans in the galaxy, but they should not be in the majority
(the majority would seem to be those species that take as much as
they can and hold it the longest). The exception to that would
appear to be cultures that follow a trans-humanist (trans-Natural-ist?)
path where they mentally/genetically engineer out the drives that
nature builds in.

As far as the exponential growth goes, we have a pretty good example
in the computer industry and Moore's law (before that it might
have been the industrial revolution and before that agriculture).
Can you make a good argument that any of these paths could have
been "consciously" arrested? If you want to volunteer to stop
the $200 Billion/year+ electronics industry, I'll be happy to
sit back and watch. If you can't stop it, then Moravec/Minsky
would seem to have a case -- we may not know how to create
intelligence (other than the good ole natural way), but if
we keep at it long enough we should figure it out. Biotech-enabled
super-longevity and nanotech-enabled ultra-longevity
would seem to fall under the same development principles.
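
To put a rough number on "keep at it long enough", here is a toy
calculation using the usual 18-month doubling rule of thumb (my
figure, not a claim about any particular technology):

    doublings = 30 / 1.5     # 30 years at one doubling every 18 months
    print(2 ** doublings)    # ~1e6 -- a million-fold improvement

A million-fold improvement inside a single working lifetime is the
kind of slope that is very hard to "consciously" arrest.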

> You are assuming that EVERY society will want to transcend, rather than
> just staying at a comfortable early 21st century level.

The environmental movement has been trying for 30-40 years to
"stop" our growth without much success. The primary reason
is hasn't worked is that we can siphon off a fraction of our
productivity growth and technological capacities and
apply these to solving the environmental problems. It is
pretty clear at this point that we can develop the
technologies to expand to the limits allowed on the planet
and then off the planet. That realization should occur
in any other technological civilization as well (if you
wait long enough).

Dr. Hekimi (the discoverer of the clk gene in nematodes) once
made the comment to me -- "if man can imagine it and it
is possible, sooner or later he will do it". That seems
very true to me; it seems to arise from the nature
of competition and the direct or indirect advantages
one derives from creating something new, different, or better.

If evolving to the limit of physics *is* feasible, and "life"
is designed to "evolve", can you make a case for the cessation
of evolution?

> You are also wrongly assuming that following a singularity by some
> percentage of the population that the rest of the population just
> disappears.

No, not really. It doesn't matter in my mind whether
  (1) Bill Gates turns himself into an M-Brain and turns off the
      sun on the rest of us.
  (2) We all (every single individual who wants it) turn ourselves
      into a unified collective M-brain which either
        (a) takes the hydrogen in Jupiter and leaves the solar system,
            leaving behind the luddites who didn't want to join us, or
        (b) dismantles every single aggregate of atoms in the solar
            system (other planets, asteroids, the earth (and the luddites
            on it), the sun, etc.) for reformation into an optimal
            computational architecture.

The point would be that in both (1) and (2) you still get an M-Brain,
and M-Brains seem to have lifetimes of the order of the longevity of
the universe. In 2a the luddites probably have a maximum lifetime
of a few billion years (until the sun becomes a red giant), unless
they decide to move the planet or "manage" the sun (then they aren't
luddites any more). Since the M-Brains are now at the top of the
evolutionary ladder (biggest, most intelligent, longest lived, able
to anticipate and avoid any potential hazards, etc.) they have to become
the most populous "species". [Survival of the fittest.]

M-brains don't *have* to harvest or dismantle any of the
luddites or their star (there is plenty of other material around
from which to construct and power themselves at the time of the
singularity). Whether they choose to behave that way may depend a
lot on the path by which they develop -- a self-evolving
AI with no "moral" code probably would consume us to optimize
itself; on the other hand, if the M-brain is constructed from
uploads of us, it might harbor some nostalgia towards the Earth
and/or the sun and leave them intact.

For all of this not to happen, I believe you have to make the case
that substantially all of the individuals who are members of an
evolving technological species (on the slippery slope towards the
singularity), universally decide -- "This far and no further".
The "anti-technology-police" would have to enforce the decision on the
non-believers. As Ben Bova has pointed out in his recent "Immortality"
book, that is a very difficult thing to do because of the benefits
one personally derives from breaking the rules. [BTW, this book
is worth reading -- see my review comment on Amazon.]

[Yes, a technological civilization might consist of a species
that has a single or collective mind (instead of a collection
of individual minds), but would they all?]

You may believe that an M-brain is a bad idea, but I assure you
that if I get my hands on a nanoassembler first, I'm not stopping
until I've got around 10^20 distributed replicated copies of myself
[that leaves room for anyone else who wants to hop on the boat,
since the idea of talking to that many copies of myself for the
next 100 billion years or so seems really unpleasant... :-)].
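
In case 10^20 sounds like a long project, it isn't -- exponential
replication gets there in a few dozen doublings (the cycle time below
is an arbitrary assumption, just to show the scale):

    import math

    copies = 1e20
    doublings = math.log2(copies)    # ~66 doublings
    cycle_days = 1.0                 # assumed days per replication cycle
    print(doublings * cycle_days)    # ~66 days

Sixty-odd doublings at a day apiece is barely two months, which is
nothing on the timescales being discussed here.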

There is one objection to all of this, and that is that the waters
of the singularity slope are so rough that virtually *all*
civilizations capsize trying to navigate them. However, that
requires you to invoke a grey goo type scenario that so totally
destroys the civilization that it never recovers to approach the
singularity river ever again.

Robert


