From: Michael S. Lorrey (mike@lorrey.com)
Date: Sat Jul 10 1999 - 16:56:13 MDT
"Robert J. Bradbury" wrote:
>
> > "Michael S. Lorrey" <mike@lorrey.com> wrote:
> >
> > You are both wrongly assuming that all technological civilizations would
> > have similar exponential curves in technological/population development.
>
> I'm not making this assumption. Dyson argued 40 years ago that even
> if we slowed ourselves down to a 1% annual growth rate, we would still
> reach a power consumption level equal to the entire solar output
> in only 3000 years! Another SETI researcher made the point that
> it is difficult to imagine a society that could maintain a 0.000000...%
> growth rate for thousands (or millions) of years (either you have an
> accident or you decay away *or* you eventually evolve to your
> environmental limits).
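For reference, Dyson's arithmetic itself is easy to check. A minimal sketch, assuming round illustrative inputs of roughly 10^13 W of present human power use and roughly 3.8 x 10^26 W of total solar output (these starting figures are assumptions for illustration, not necessarily Dyson's exact numbers):

    # Rough check of the "entire solar output in ~3000 years" figure.
    # Starting values are illustrative assumptions, not Dyson's inputs.
    import math

    current_power = 1e13     # watts, roughly late-20th-century civilization
    solar_output = 3.8e26    # watts, total luminosity of the Sun
    growth_rate = 0.01       # 1% annual growth in energy use

    years = math.log(solar_output / current_power) / math.log(1 + growth_rate)
    print(round(years))      # prints roughly 3100

So a steady 1% compounding of raw consumption does reach stellar scales in roughly three millennia.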
A society may grow spatially and fill the volume of its solar system
with habitats, build a Niven ring or a Dyson Sphere, settle nearby
stars, and develop human-equivalent artificial intelligence, yet never
NEED to develop computers more than one or two orders of magnitude
beyond human IQ.
Technological advancement is not a goal in and of itself; it merely
serves to optimize human existence, and it does so only at economically
cost-effective rates. Dyson's major error in his calculations was to
assume no increase in efficiency of utilization. Rising efficiency is
not only crucial to making further growth cost effective; it also makes
ever more efficient re-use of existing resources the cost-effective
strategy, so a species ends up extracting more and more from the
resources it already has rather than endlessly consuming new ones.
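A toy calculation of that point, a minimal sketch assuming purely illustrative rates of 1% annual growth in useful output demanded and 2% annual improvement in output per watt (both rates are assumptions chosen only to show the direction of the effect):

    # Illustrative only: if demand for useful output grows 1%/yr but
    # efficiency (useful output per watt) improves 2%/yr, raw power draw
    # actually shrinks, so Dyson's extrapolation never bites.
    output_growth = 0.01
    efficiency_growth = 0.02

    power = 1.0  # normalized starting power draw
    for year in range(3000):
        power *= (1 + output_growth) / (1 + efficiency_growth)
    print(power)  # roughly 1e-13 of today's draw, not 10^13 times larger

Whether efficiency can keep improving for millennia is its own question, but the exponent Dyson extrapolated is not fixed by physics alone.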
Since our population growth rate is already only around 1%, if we
further increase the education and intelligence of the population, the
growth rate will fall even faster, possibly into negative percentages
much as in European countries (which is why they can sustain lower
economic growth rates and still maintain standards of living comparable
to the US). As longevity increases in a population, the desire to
reproduce decreases, thus reducing the economic growth rate needed to
maintain improving standards of living.
>
> Growth is a fact of life. It is built into Nature. Population biologists
> tell you that populations grow to the limits of the environment and then
> crash when conditions change for the worse. I would argue that it is
> highly difficult for Nature to evolve "internal" limits. The limits
> are "imposed" by the environment. Look what happens in situations
> involving the introduction of a non-indigenous species (say rabbits
> in Australia) -- if the environment is suitable the species expands to the
> allowable limits. In most other environments you don't see it
> because everything is in balance between the predators and the prey
> and the food resources and the reproduction rate.
Comparing the behavior of non-intelligent species to the long-term
behavior of intelligent species is fallacious and fraudulent.
>
> I would also argue that by definition "technological civilizations"
> get on pretty exponential growth paths. Humans didn't have exponential
> growth (in fact we were barely surviving as a species) *until* we
> developed the technologies that allowed us to manipulate the
> environment in ways more sophisticated than our genetic program allowed.
And it has yet to be proven that some or all 'exponential growth paths'
are not really S-curves that max out at some plateau, the highest
plateau being that set by light speed.
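A minimal sketch of that possibility (the ceiling K, rate r, and starting value are arbitrary illustrative assumptions): early on, a logistic S-curve is indistinguishable from a true exponential, and only later does the plateau show.

    import math

    K = 1000.0   # hypothetical ceiling imposed by physics or economics
    r = 0.5      # growth rate shared by both curves
    x0 = 1.0     # starting level

    for t in range(0, 30, 5):
        exponential = x0 * math.exp(r * t)
        logistic = K / (1 + (K / x0 - 1) * math.exp(-r * t))
        print(t, round(exponential, 1), round(logistic, 1))
    # The two track each other closely at first; the logistic curve
    # then flattens toward K while the exponential runs away.

Observers sitting on the early part of such a curve cannot tell which one they are on.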
>
> I believe you have to make a concrete case that a developing technological
> species/civilization would consciously *choose* to terminate its growth.
> That means that you have to negate the fundamental self-preservation
> and/or reproductive instincts necessary for life. As I've discussed
> in other threads -- if you want to be immortal, you have to eliminate
> reproduction -- if you want to reproduce, you have to choose to die
> (or prevent the development of technologies that enable personal
> immortality). There *are* hard limits to growth. There may be a few
> examples of Vulcans in the galaxy, but they should not be in the majority
> (the majority would seem to be those species that take as much as
> they can and hold it the longest). The exception to that would
> appear to be cultures that follow a trans-humanist (trans-Natural-ist?)
> path where they mentally/genetically engineer out the drives that
> nature builds in.
Since greater intelligence and education result in lower population
growth rates, it's not hard to imagine a future where the population
falls back to the levels of the earlier hunter-gatherer society (about
2 million per planet), yet each individual is of high intelligence and
engages in far more intellectual interaction than the feral/agronomist
lifestyle allowed.
>
> As far as the exponential growth goes, we have a pretty good example
> in the computer industry and Moore's law (before that it might
> have been the industrial revolution and before that agriculture).
> Can you make a good argument that any of these paths could have
> been "consciously" arrested? If you want to volunteer to stop
> the $200 Billion/year+ electronics industry, I'll be happy to
> sit back and watch. If you can't stop it, then Moravec/Minsky
> would seem to have a case -- we may not know how to create
> intelligence (other than the good ole natural way), but if
> we keep at it long enough we should figure it out. Biotech
> enabled super-longevity and nanotech enabled ultra-longevity
> would seem to fall under the same development principles.
Moore's Law has yet to be shown to have no upper limit. Indeed, light
speed itself puts a limit on the maximum growth of computational
technology. Making Moore's Law a Holy Mantra is an error of faith, not
a scientific position.
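One back-of-the-envelope illustration of such a limit, sketched under the assumption of a single-clock-domain processor on a 3 cm die (the die size is an arbitrary assumption, and real designs pipeline and parallelize around this, so it bounds one architecture rather than computation as a whole):

    c = 3.0e8        # speed of light in m/s
    die_size = 0.03  # 3 cm across, a large die

    transit_time = die_size / c         # ~1e-10 s, i.e. ~100 picoseconds
    max_sync_clock = 1.0 / transit_time
    print(max_sync_clock)               # ~1e10 Hz: about 10 GHz if a signal
                                        # must cross the die once per cycle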
>
> > You are assuming that EVERY society will want to transcend, rather than
> > just staying at a comfortable early 21st century level.
>
> The environmental movement has been trying for 30-40 years to
> "stop" our growth without much success. The primary reason
> it hasn't worked is that we can siphon off a fraction of our
> productivity growth and technological capacities and
> apply these to solving the environmental problems. It is
> pretty clear at this point that we can develop the
> technologies to expand to the limits allowed on the planet
> and then off the planet. That realization should occur
> in any other technological civilization as well (if you
> wait long enough).
>
> Dr. Hekimi (the discoverer of the clk gene in nematodes) once
> made the comment to me -- "if man can imagine it and it
> is possible, sooner or later he will do it". That seems
> very true to me; it seems to arise from the nature
> of competition and the direct or indirect advantages
> one derives from creating something new, different or better.
Your error is to assume that it will happen sooner rather than later --
much the same error the early Christians made in expecting the Second
Coming in their lifetimes. Sounds too much like a religious attitude to
me.
>
> If evolving to the limit of physics *is* feasible, and "life"
> is designed to "evolve", can you make a case for the cessation
> of evolution?
Evolution is a result of pressures against the survival of the
individual. Once the individual is practically immortal and intelligent
enough to handle most eventualities in the physical world, they have no
further need of evolution.
>
> > You are also wrongly assuming that, following a singularity by some
> > percentage of the population, the rest of the population just
> > disappears.
> No, not really. It doesn't matter in my mind whether
> (1) Bill Gates turns himself into an M-Brain and turns off the
> sun on the rest of us.
> (2) We all (every single individual who wants it) turn ourselves
> into a unified collective M-brain and
> (a) Takes the Hydrogen in Jupiter and leaves the solar system,
> leaving behind the luddites who didn't want to join us.
> (b) Dismantles every single aggregate of atoms in the solar
> system (other planets, asteroids, earth (and the luddites
> on it), the sun, etc.) for reformation into an optimal
> computational architecture.
Your points here prove my statement. You refuse to
acknowledge that once some part of the population 'transcends' (the
quotes denoting your mystical attitude toward that condition), the
remaining population will not automatically be either transcended or
destroyed. That we have stone age cultures in coexistence with our own
right now illustrates the fallacy of your argument. Your argument
assumes a level of evil and callousness in the motives of transcended
beings that I personally would take as an argument to stamp out all
efforts to transcend.
>
> The point would be that in both (1) and (2) you still get an M-Brain
> and M-Brains seem to have lifetimes of the order of the longevity of
> the universe. In 2a the luddites probably have a maximum lifetime
> of a few billion years (until the sun becomes a red-giant), unless
> they decide to move the planet or "manage" the sun (then they aren't
> luddites any more). Since the M-Brains are now at the top of the
> evolutionary ladder (biggest, most intelligent, longest lived, able
> to anticipate and avoid any potential hazards, etc.) they have to become
> the most populous "species". [Survival of the fittest.]
>
> M-brains don't *have* to harvest or dismantle any of the
> luddites or their star (there is plenty of other material around
> from which to construct and power themselves at the time of the
> singularity). Whether they choose to behave that way may depend a
> lot on the path by which they develop -- a self-evolving
> AI with no "moral" code probably would consume us to optimize
> itself, on the other hand if the M-brain is constructed from
> uploads of us, it might harbor some nostalgia towards the Earth
> and/or the sun and leave them intact.
You fallaciously assume that an AI will not develop a moral code, as if
there is no objective morality. Sorry, null program.
>
> For all of this not to happen, I believe you have to make the case
> that substantially all of the individuals who are members of an
> evolving technological species (on the slippery slope towards the
> singularity), universally decide -- "This far and no further".
> The "anti-technology-police" would have to enforce the decision on the
> non-believers. As Ben Bova has pointed out in his recent "Immortality"
> book, that is a very difficult thing to do because of the benefits
> one personally derives from breaking the rules. [BTW, this book
> is worth reading -- see my review comment on Amazon.]
Read it. You are still evading the point. Just as there are stone age
cultures alive and functioning today, 20th century cultures will survive
into and beyond the supposed date of any 'singularity'. You need
to stop looking at it as some sort of Day of Rapture in which all will
either participate or go to hell.
>
> [Yes, a technological civilization, might consist of a species
> that has a single or collective mind (instead of a collection
> of individual minds), but are they all?]
>
> You may believe that an M-brain is a bad idea, but I assure you,
> that if I get my hands on a nanoassembler first, I'm not stopping
> until I've got around 10^20 distributed replicated copies of myself
> [that leaves room for anyone else that wants to hop on the boat,
> since the idea of talking to that many copies of myself for the
> next 100 billion years or so seems really unpleasant... :-)].
If you do so and do not accept any objective morality of survival and
coexistence, then I will be sure to nuke you before you attain your
goal. Capisce?
Mike Lorrey