Re: MEDIA: NOVA on Gamma Ray Bursters

From: Eugene Leitl (Eugene.Leitl@lrz.uni-muenchen.de)
Date: Thu Jan 10 2002 - 14:31:06 MST


On Thu, 10 Jan 2002, Robert J. Bradbury wrote:

> Sigh!

Hehehehe.

> An advanced intelligence is only going to create an agent to do the
> exploring *if* it can guarantee that that agent will never decide to
> consume the resources of the creator. Can it do that? Only if it

An advanced intelligence can't help creating agents. It's not a monad,
it's a big messy thing. It's no more in control than we are, and perhaps a
lot less, given the colossal canvas and the much larger population size.

> No assumptions are required if you assume the singularity rapidly
> drives them to optimality.

Unless the post-singularity strips away the Darwinian regime, it's still
subject to the same constraints.

> I'm going to repeat this every time this discussion comes up.
>
> WE DO *NOT* HAVE THE TECHNOLOGY TO DETECT THE UNIVERSE
> IS NOT IN LARGE PART ALREADY ENGINEERED.

I don't have to explain nonobservable phenomena. Given that even very
modest engineering would be apparent to anyone watching the skies with the
unaided eye, I don't think I have to explain anything.

> The faulty assumption is that if the engineering could be done,
> everything would appear "engineered" (to us). *We* can engineer, yet
> there are parts of the Earth and solar system that appear very

The Earth looks very engineered even from a great distance. The solar
system, yes, it still looks un-engineered, but only because we're not a
space-faring species yet. In several decades the local system will
probably start to look rather obviously engineered.

> un-engineered -- whether by intent or simply the fact that we haven't
> gotten around to engineering them because there is no economic benefit
> (or we may lack the matter & energy resources) to engineer them at
> this point.

The reason we're not doing it is that we can't. We can't build
self-replicating machinery yet. Even partial closure gives us major
indigestion.

> One can make a strong assertion that in the beginning there
> was no engineering. One may make an assertion that in
> the end (after all the protons decay, if they do so)
> there will be no surviving engineering. But given that

We don't know enough physics to say definitively yet.

> we and our engineering exist -- the simplest assumption
> is that we are someplace in the middle of the development
> of engineering the universe.

The anthropic effect says we cannot derive any population data from the
mere fact that we observe ourselves (try posing the same question a mere
century ago). All the evidence seems to imply we're the brand-new kids on
a brand-new block.

> The historic arguments have been that "seeds" are cheap
> or "independent actors" will explore and colonize.
> But you have to get this (I've thought about it a
> lot) -- once you know you have trumped the hazard
> function of the galaxy -- "seeds" and "independent
> actors" are significant threats to your long-term
> survival!

Er, as I see it, after the Omega plateau is reached (which could be rather
soon, in terms of human years), the residual fitness-function fluctuation
is low-amplitude and random.

> *So*, one may become an essentially existentialist actor --
> "I might as well live today, because sooner or later
> my children, agents, seeds, etc. will eliminate me."
> *or* one says that I am not going to produce anything
> that may compete with me for future resources. (Thus

I don't have your scruples, and hence I drive you to extinction, assuming
you're pre-Omega plateau.

> one only creates sub-minds running on your own hardware
> and one deletes these programs once they have outlived
> their usefulness.)

I think we'll see plenty of predatory activity in physical space, too.

> What is the most "trustable" agent? One that you can
> squash like a bug if it threatens you in any way.

YOU'RE NOT IN CONTROL. What makes you think your successors or you-derived
systems are ever going to have more control than you? On what evidence?

> As recent events have shown -- if you give an agent
> a certain amount of power, then ignore it, it may
> grow to a level that it can inflict significant harm
> upon oneself. In which case you have to apply significant
> resources to eliminating such agents and may have relatively
> little confidence that you have been successful in doing so.
> I do not believe that is the way that "advanced" civilizations
> (or more probably SIs) will behave.

Certainly, we're not talking about people here, so all bets are off. But
in the absence of further information, when gazing into the crystal ball
you must apply extremely basic assumptions about the constraints such
agents are subject to. Given evolution, and the absence of any mechanism
for stripping off the Darwinian regime, I don't see how you can propose
they would follow the course of action you've described.

It's not impossible, but you'll need a lot more data to convince us.

-- Eugen* Leitl leitl
______________________________________________________________
ICBMTO: N48 04'14.8'' E11 36'41.2'' http://www.leitl.org
57F9CFD3: ED90 0433 EB74 E4A9 537F CFF5 86E7 629B 57F9 CFD3


