SETSIs

From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Sat Jul 10 1999 - 22:55:00 MDT


> "Michael S. Lorrey" <mike@lorrey.com> wrote:
>
> A society may grow spatially and fill the volume of its solar system
> with habitats, build a Niven ring or Dyson Sphere, may settle nearby
> stars, and develop human equivalent artificial intelligence, yet not
> NEED to develop computers beyond one or two magnitudes of human IQ.

Correct. But this does not constitute an argument that *no* civilization
will go beyond this level.

> Technological advancement is not a goal in and of itself, it merely
> serves to optimize human existence, and it does so at economically cost
> effective rates. Dyson's major error in his calculations was to assume no
> increase in efficiency of utilization, which is not only crucial to make
> further growth cost effective, but establishes resource re-utilization
> at rates of greater efficiency as a cost effective strategy, eventually
> concluding in a species being able to utilize resources at more and more
> efficient rates.

Huh? Where does it say this? I've got the Dyson paper (and can send
you a copy), and as far as I can tell it says nothing of the sort.
Either you are claiming that productivity increases do not grow
at compounded rates, or I really don't understand.

>
> Since our population growth rate is already at around 1%, if we increase
> the education and intelligence of the population more, the growth rate
> will fall even faster, possibly into the negative percentages much like
> european countries (which is why they can sustain lower economic growth
> rates and maintain standards of living with the US). As longevity
> increases in a population, the desire to reproduce decreases, thus
> reducing the necessary economic growth rates to maintain improving
> standards of living.

Ok, great. Let's assume that this trend continues and effectively
*eliminates* the desire for reproduction. It doesn't, in me at
least, eliminate the desire to know and understand more. In
that regard, I am currently biologically constrained and would
like to "break out". Giving me more space doesn't help much.
Giving me better technologies with which to engineer my mind
and more energy to run it does. Now we are back to M/J-Brains
(planet-sized vs. solar-system-sized supercomputers).

>
> Comparing the behavior of non-intelligent species to the long term
> behavior of intelligent species is fallacious and fraudulent.
>
Why? Though we may be intelligent, we demonstrate all the behavior
patterns of a non-intelligent species -- strong mating urges, violence,
colonization of available ecological niches, etc.

The key phrase would seem to be "long term"; the problem would be
in modifying those behavior patterns at the rate the singularity occurs
(which is weeks). If an "intelligent species" cannot prevent or
adapt to the singularity, then all generally accepted paradigms
for "normal" behavior become irrelevant.

It would seem that consensus-driven political systems suffer
a significant delay in adapting to rapid change around them.
[I'll cite the record industry and MP3 as an example.]
If the change is rapid and you aren't prepared to deal
with it, then it is over before you can respond. Right
now, you have an opportunity to convince people that
building and uploading into an M-Brain is a very bad
idea. But once the construction starts (as it takes only
a matter of weeks to build), anything you plan to do would
be too late.

>
> And it is yet to be proven that some or all 'exponential growth paths'
> are not paths of the third order, maxing out at some plateau, the
> highest being that of light speed.

Exactly: a major reason to build M/J-Brains is to get our communication
bandwidth to highly parallel light speed (instead of sound or neuronal
speeds). And you do max out at the light-speed plateau -- that's why I keep
arguing that evolution (as we typically think of it) stops there.
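
[Aside: a minimal back-of-the-envelope sketch of that bandwidth gap, in
Python. The ~100 m/s conduction velocity for fast myelinated axons and
the distances used are my illustrative assumptions, not figures from
this thread.]

  # Signal delays: neuronal conduction vs. light-speed links.
  C = 3.0e8          # speed of light, m/s
  NEURON_V = 100.0   # fast myelinated axon conduction velocity, m/s (assumed)

  def delay(distance_m, velocity_m_s):
      """One-way signal delay in seconds."""
      return distance_m / velocity_m_s

  brain_span = 0.1            # ~10 cm across a human brain (assumed)
  earth_diameter = 1.2742e7   # m

  print(f"Neuronal, across a brain:    {delay(brain_span, NEURON_V) * 1e3:.1f} ms")
  print(f"Light-speed, across a brain: {delay(brain_span, C) * 1e9:.2f} ns")
  print(f"Light-speed, across Earth:   {delay(earth_diameter, C) * 1e3:.1f} ms")
  print(f"Speed ratio (light/neuron):  {C / NEURON_V:.0e}x")

The speed ratio works out to roughly 3 x 10^6, which is the gap I mean
between neuronal signaling and highly parallel light-speed channels.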

>
> Since greater intelligence and education result in lower population
> growth rates, it's not hard to imagine a future where the population
> falls down to the levels of the previous hunter gatherer society (about
> 2 million per planet), yet each individual is of high intelligence, and
> engages in much intellectual interaction outside of the feral/agronomist
> lifestyle.

No disagreement. The transhumanists who are willing to embrace
evolutionary technologies survive and everyone else eventually
dies off. Self-selection into the environments with the best
opportunities wins.

>
> Moore's Law has yet to be shown to have no upper limit. Better yet,
> light speed itself puts a limit on maximum growth of computational
> technology. Making Moore's Law a Holy Mantra is an error of faith. Not
> very scientific.

True, though the 5-atom gate-thickness limit on CMOS is going
to be a difficult nut to crack. We hit that circa 2012. If you want
really hard limits you have to go to the Bremermann/Bekenstein
bounds. There are probably 3 steps --
  (a) Go to the limit of current devices.
  (b) Go to the limit of single-electron quantum-dot devices.
  (c) Go to the limit of the Bremermann/Bekenstein bounds.

It is very unclear at this point how you would make the (b) - (c)
transition.
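
[For scale, here is a minimal sketch of what those hard limits look like,
evaluated with the standard textbook formulas for an illustrative 1 kg,
1 m system -- the reference mass and radius are my choices, not numbers
from this thread.]

  import math

  h    = 6.626e-34   # Planck constant, J*s
  hbar = 1.055e-34   # reduced Planck constant, J*s
  c    = 2.998e8     # speed of light, m/s

  mass_kg  = 1.0     # illustrative system mass (assumed)
  radius_m = 1.0     # illustrative system radius (assumed)

  # Bremermann's limit: maximum bit-processing rate for a given mass, ~ m*c^2 / h.
  bremermann_ops = mass_kg * c**2 / h

  # Bekenstein bound: maximum information in a sphere of radius R and energy E.
  energy_j = mass_kg * c**2
  bekenstein_bits = 2 * math.pi * radius_m * energy_j / (hbar * c * math.log(2))

  print(f"Bremermann limit: ~{bremermann_ops:.1e} bit-operations/s per kg")
  print(f"Bekenstein bound: ~{bekenstein_bits:.1e} bits in a 1 kg, 1 m sphere")

That is roughly 1.4 x 10^50 operations per second and 2.6 x 10^43 bits --
many orders of magnitude beyond where CMOS tops out.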

The point of my invoking Moore's Law (and others) is that
economics drives evolution. In our "environment", if you derive
a better economic solution, you get supported.
Faster computers, more variety, and fancier VR environments
*will* get funding unless you find a way to change
human desires or economic systems.

>
> Your error is to assume that it will happen sooner rather than later.
> Much the same errors the early christians made in expecting the 2nd
> coming in their lifetimes. Sounds too much like a religious attitude to
> me.

I assure you it is not religious; it simply results from
doing the calculations of how fast asteroids and
planets can be disassembled. [Rapid disassembly gives you an
exponential increase in power-harvesting capacity, which in
turn enables increased intelligence, which in turn enables
more rapid design of optimal architectures...]
If you can demonstrate that this cannot proceed quickly,
then you might be correct.

To understand this growth rate, I would suggest you review:
   - Doubling time of a nanoassembler: 5 sec. (Nanosystems, pg 407)
   - Doubling time for self-replicating nanomachines (e.g. bacteria):
       20 minutes (many medical texts, contact me if you want citations)
   - Doubling time for large scale capital stocks (< 10,000 sec = 3 hrs)
     (Nanosystems, pg 1); delivery of 1 kg product by < 1 kg manufacturing
     system (< 1 hr; Nanosystems, pgs 421-425).

These growth rates allow you to manipulate galactic masses within
a few weeks (if you have the material & energy locally available).
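
[A quick sanity check on that "few weeks" figure, using the capital-stock
doubling time above -- the 1 kg seed and the particular target masses are
my illustrative choices, not numbers from Nanosystems:]

  import math

  DOUBLING_TIME_HR = 3.0   # ~10,000 s per capital-stock doubling (Nanosystems, pg 1)
  seed_kg = 1.0            # assumed starting seed of manufacturing capacity

  targets = {
      "Earth (~6e24 kg)":     5.97e24,
      "Sun (~2e30 kg)":       1.99e30,
      "Milky Way (~2e42 kg)": 2.0e42,
  }

  for name, mass_kg in targets.items():
      doublings = math.log2(mass_kg / seed_kg)
      days = doublings * DOUBLING_TIME_HR / 24.0
      print(f"{name}: {doublings:.0f} doublings, ~{days:.0f} days")

Even a galactic mass is only ~140 doublings away from a 1 kg seed, or
about two and a half weeks at a 3-hour doubling time -- which is where
the "few weeks" comes from.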

So, if we get on this path, growth to the SI level is rapid.
I would argue that the burden of proof falls on you to show that we do
not get on this path, or that, if we do, we rapidly get off it.
       
The trend line we are on now leads directly to this!
>
> >
> > If evolving to the limit of physics *is* feasible, and "life"
> > is designed to "evolve", can you make a case for the cessation
> > of evolution?
>
> Evolution is a result of pressures against the survival of the
> individual. Once the individual is practically immortal and intelligent
> enough to handle most eventualities in the physical world, they have no
> further need of evolution.

Exactly. So you have to make the case that they would
 (a) Convince themselves that evolution is no longer needed,
and
 (b) "edit it out" of their persona(s).

I'm sorry, but even when you cease reproduction, peacefully coexist
with the other few million planetary inhabitants, etc., you still have
the problem that:
  H-Brains do not have the capacity to prevent/avoid galactic
  catastrophes,
and
  M-Brains do.

Which survives the longest?
Which do you choose?

>
> Your points here illustrate the proof of my statement. You refuse to
> acknowledge that once some part of the population 'transcends' (quotes
> being to denote your mystical attitude toward that condition), that the
> remaining population will not automatically be transcended or be
> destroyed. That we have stone age cultures in coexistence with our own
> right now illustrates the fallacy of your argument. Your argument
> assumes a level of evil and callousness in the motives of transcended
> beings that I personally would take as an argument to stamp out all
> efforts to transcend.

It isn't a "mystical" disconnect so much as a logical/economic one.
The difference in resources is 10^13x+. If I walk up to you tomorrow
and say -- hey, I'm going to offer you 10x what you are getting paid
now to empty garbage -- will you accept the offer?

Perhaps not. But as the saying goes (almost) everyone has their
price.
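
[One way to arrive at a ratio of that order -- my reconstruction of the
arithmetic, not necessarily how the 10^13x+ figure was originally
derived: compare the Sun's total power output, which an M-Brain shell
would capture, with humanity's present power budget.]

  SOLAR_LUMINOSITY_W = 3.8e26   # total power output of the Sun
  HUMAN_POWER_USE_W  = 1.3e13   # rough late-1990s world primary power use (assumed)

  ratio = SOLAR_LUMINOSITY_W / HUMAN_POWER_USE_W
  print(f"M-Brain vs. present civilization power budget: ~{ratio:.0e}x")

That prints roughly 3e+13, i.e. a bit over 10^13 times the resources we
currently command.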

I agree entirely that there will be "takers" and "decliners" of
the M-Brain "offer". My basic premise would be that in the
long term, the M-Brain "takers" survive, while the "decliners"
die out. Simple, ruthlessly efficient "natural selection" that
doesn't care one bit what you "think" about it.

For that not to occur you have to argue "conservation"
agendas for the M-Brains. I'm sorry, but I don't notice
you saying "I cannot take a walk in the forest, for
I knoweth that I shall stepeth on the nematodes".
To win this argument, you have to argue an extraordinarily
higher "caring" for lower life forms by M-Brains than we
ourselves demonstrate.

> You fallaciously assume that an AI will not develop a moral code, as if
> there is no objective morality. Sorry, null program.

I would argue that any AI would evolve a program for its self-preservation.
Whether it evolves any "morality", or one "consistent" with our own
remains a very open question.

As I discussed in another thread -- humans (currently) have unreliable
"morality". AIs might be programmed with, or evolve, completely
reliable (trustable) morality. Go ahead, make the argument
for the preservation of an untrustable morality over a
trustable one...

>
> Read it. You still are evading the point. Just as there are stone age
> cultures alive and functioning today, 20th century cultures will survive
> into and beyond the time of any supposed date of 'singularity'. You need
> to stop looking at it as some sort of Day of Rapture that all will
> participate in or go to hell.

I've argued consistently for the preservation of diversity
(even in the face of what may be minuscule ROIs).

SIs and planetary/tribal cultures are in little conflict with each
other (any more than humans and insects are). The SI consciousness(es)
may well decide to preserve the history/background culture.

In the long run, however, you have to argue that the SI is
*so concerned* that it will take an active role in the
preservation of the environment [we don't want our great
.... children to be roasted by the sun going into its red
giant phase, so let's stop it, shall we?...]

>
> If you do so and do not accept any objective morality of survival and
> coexistence, then I will be sure to nuke you before you attain your
> goal. Kapisch?
>
I, personally, do generally accept the morality/survival goals.
[Fundamentally: do unto others as you would have them do unto you.]

So, by definition, I should do nothing to hinder your personal
survival. OK, that makes absolute sense. Do you require that I
take a proactive role in promoting your survival? How much of my
own resources am I required to dedicate to these promotional activities?

Regarding "nuking", thats fine. The problem that comes to mind is
that you have to nuke me, Bill Gates, Larry Ellison and a host of
other people who are used to "survival of the fittest" business
tactics where "the terminator wins". Can you honestly think or
believe that you will be successful in eliminating all of us?
The problem is that it is like a chess game, eliminate the queen
and the rook or the bishop assumes the position of the most valuable
piece. All you do by eliminating an individual is transfer the
power.

Robert


