From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Sun Jul 11 1999 - 14:42:00 MDT
> GBurch1@aol.com wrote:
>
> But the point is that some small number of humans DO go to the trouble and
> endure the "boredom" of the content of the conversation for the simple sake
> of learning a little bit.
The key phrase is "learning a little bit." I would say that they would
already know anything we had to teach them (other than those
possibilities related to increased diversity and/or the chaos-derived
inventions previously mentioned).
We are still unraveling scientific laws; they probably know them all.
We are still working on our planetary inventory and on self-replicating
disassembler machines; they would have had a simple sub-agent
do all of that long ago.
That puts what they can learn from us on a really short list.
>
> You made the chillingly apt point when we discussed this in Palo Alto
> that many, if not most, SIs might spend the vast majority of their
> time in "suspend mode" out of what amounted to a god-like boredom.
Well, they do have a lot of TV to watch (200 billion stars and/or
planetary systems), and they can carry on a lot of conversation
over very high-bandwidth links with the other SIs (perhaps 400 billion).
My gut feeling, based on energy-conservation concerns and light-speed
restrictions, is that you segment the galaxy into autonomous regions
"managed" by the nearest SI. Those SIs interact primarily with their
nearest neighbors.
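
Just to make that concrete, here's a toy Python sketch of the
nearest-SI segmentation. Every count and coordinate in it is an
invention of mine for illustration: assign each star to whichever
SI is closest, and each SI ends up "managing" a Voronoi-like region.

# Toy model: partition stars among SIs by nearest-neighbor assignment.
# All counts and coordinates are made up purely for illustration.
import math
import random

def nearest_si(star, sis):
    """Index of the SI closest to the given star."""
    return min(range(len(sis)), key=lambda i: math.dist(star, sis[i]))

random.seed(42)
# Pretend galaxy: 1000 stars and 10 SIs in a 100,000 light-year square.
stars = [(random.uniform(0, 1e5), random.uniform(0, 1e5)) for _ in range(1000)]
sis = [(random.uniform(0, 1e5), random.uniform(0, 1e5)) for _ in range(10)]

regions = {}
for star in stars:
    regions.setdefault(nearest_si(star, sis), []).append(star)

for i in sorted(regions):
    print(f"SI {i} manages {len(regions[i])} stars")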
> But I can also imagine a situation in which an SI might
> create various subparts of itself with lesser levels of intelligence, perhaps
> a myriad of such subdivisions, each with slightly different capacities for
> interest and engagement in various subjects. Managing this hierarchy of
> sub-intelligences might be the ultimate business of the ultimate SIs.
Sure, that's how our minds work. This is what Minsky's Society of Mind is about.
>
> Perhaps, with cosmic time and ultimate computational resources on their
> hands, at least some SIs will come to play the Brahma "god game".
Yep, we may be an experiment. They certainly have the resources to do it.
> If so, then there will be perhaps many, many quite potent intelligences in the
> universe -- intelligent far beyond our current level -- but still not at any
> particular time fully in possession of all of the knowledge and power of a
> "complete", unitary SI.
SIs have limits too. As your mass increases, so too do your power
requirements for any navigational course corrections. As you age,
presumably you have a memory storage problem -- where do you store the
galactic history of 200 billion stars for 5 billion years? Since you
aren't space-limited, and you probably don't mind long recall times,
you probably construct large data-storage cubes. Obviously an SI on
one side of the galaxy can't quickly access the stored data of
an SI on the other side of the galaxy. Your "retrieval time" on
a particular piece of data may be 100,000 years -- simply the
light-crossing time of the galaxy. So yes, I agree
with your statement.
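
To put rough numbers on the storage problem (all the per-star figures
below are assumptions of mine, nothing more): even a modest 1 MB per
star per year balloons into an absurd archive, and the worst-case
retrieval latency is just the light-crossing time of the galaxy.

# Back-of-envelope numbers for the SI memory problem.
# Every constant below is an assumption chosen for illustration.
STARS = 200e9                 # stars in the galaxy
YEARS = 5e9                   # span of galactic history retained
BYTES_PER_STAR_YEAR = 1e6     # assume 1 MB recorded per star per year
GALAXY_DIAMETER_LY = 100_000  # rough diameter of the Milky Way

archive_bytes = STARS * YEARS * BYTES_PER_STAR_YEAR
print(f"Archive size: {archive_bytes:.1e} bytes "
      f"(~{archive_bytes / 1e24:.0f} yottabytes)")

# Light-speed bound: data stored clear across the galaxy takes the
# light-crossing time to fetch -- hence the 100,000-year retrieval.
print(f"Worst-case one-way retrieval: {GALAXY_DIAMETER_LY:,} years")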
> ... discussion of sub-SIs akin to Krishna/Shiva possibly talking to us ...
>
> While you make compelling arguments about the improbability of communication
> between SIs and animals with our level of intelligence, can you say that the
> scenario I describe above is impossible or even improbable?
>
Certainly not impossible, particularly if we are an experiment.
OK, here we have 2 identical stars, 2 identical solar systems,
and 2 identical seed packages of organic molecules. In both cases
we adjust as necessary to produce 2 identical sentient species
(within chaos-theory limits). Now, on this planet we "talk"
to them, and on the other planet we don't... I'm just not
sure whether we are the planet where they chose to talk to
us or the one where they chose not to... :-)
The critical question, to my mind, is whether there are
"evolutionary plateaus" or "ecological niches" that are likely to
be filled by sub-SIs. If one is *sentient* and can self-evolve,
then we are back to the Robots discussion (of another thread) --
why would it not choose to evolve as far as it possibly can
(and gobble up another star...)? Unless the Supreme SI(s) punish
self-evolving behavior rather severely, it would seem that it must occur.
Either that, or the Supreme SI(s) can create highly intelligent "robots"
with degrees of creative freedom in specific areas but no possibility
of self-evolving into something different. I would propose we call
this an SI-Mule (able to do a job, semi-intelligent/sentient [relative
to an SI], and infertile). [According to the British Mule Society, mules
can in some cases reproduce, so the analogy isn't perfect.]
Would an SI-Mule want to talk to us? I haven't a clue.
Some criteria involved might include:
(a) How far up or down the intelligence scale it is;
(b) What its curiosity "setting" is;
(c) What its fundamental design purpose is.
If it's a solar-engineering agent or a terraforming agent
{ after all, to get statistical significance in the SIs'
talk/don't-talk-to-humans experiment, you have to have
way more than 2 trials... :-) },
then I don't think it talks to us. If it is the
messenger of the regional Supreme-SI controller, then
it most certainly would.
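
Just for fun, the criteria (a)-(c) above can be written down as a
throwaway Python predicate. The fields, values, and decision rule
are all inventions of mine, not anything established.

# Whimsical formalization of criteria (a)-(c) for an SI-Mule.
# Every field, value, and rule here is invented for illustration.
from dataclasses import dataclass

@dataclass
class SIMule:
    intelligence: float  # (a) 0.0 (human-ish) .. 1.0 (full SI)
    curiosity: float     # (b) its curiosity "setting", 0.0 .. 1.0
    purpose: str         # (c) its fundamental design purpose

def talks_to_humans(mule: SIMule) -> bool:
    if mule.purpose in ("solar-engineering", "terraforming"):
        return False  # busy infrastructure agents ignore us
    if mule.purpose == "messenger":
        return True   # contact is its whole job
    # Otherwise, guess from curiosity versus how far above us it sits.
    return mule.curiosity > mule.intelligence

print(talks_to_humans(SIMule(0.9, 0.1, "terraforming")))  # False
print(talks_to_humans(SIMule(0.5, 0.2, "messenger")))     # True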
However, this discussion *is* proving useful. If the
talks/doesn't-talk experiment *is* being conducted,
then we have a really good explanation for the Alien
Abductions! The chaos on Earth129 causes a hurricane that
kills one Joe Smith. Calculations show that this will
significantly disrupt the similarity between the Earths.
Solution: send a ship down to Earth123, pick up the
Joe Smith there, create a nano-clone (one that doesn't
realize it's a nano-clone, since it's an exact replica),
and reinstall Joe Smith on Earth129.
This morning I couldn't find my shampoo in the shower.
It turned out to be by the sink. I have absolutely
no recollection of moving my shampoo from the shower
to the sink. [This is an absolutely true experience;
I'm not making this up for the purpose of the discussion!]
So now I have an explanation. Last night I got copied from
Earth122 to Earth733. Great. Now I know I'm not just
getting old and losing my memory.
We really need to know whether some of this nano-stuff is doable.
Speculating like this is going to be a real waste of time
if it's not. :-)
Robert