From: Eugen Leitl (eugen@leitl.org)
Date: Tue May 14 2002 - 05:05:03 MDT
On Mon, 13 May 2002, Lee Corbin wrote:
> Thus it's *very* important that AIs be developed who have a built-in
> incentive to be reasonably nice to humans. Especially on scores like
"Reasonably nice" is not sufficient. Your specs (list of constraints) have
to be watertight, because most of growth trajectories going through
unchartered waters of many orders over orders of magnitude of complexity
are not friendly. You have to slay Darwin, too, and to keep him out, which
is even harder (Darwin is a zombie that way).
Singleton scenarios are unphysical. Those that are not catatonic failures
(the default for man-made seed designs) explode into Blight. A tiny subset
might hit metastability, but that is long-term stagnation, and engineering
hubris in the face of the risks. Don't make it hard, make it soft. Making the
edge harder multiplies the risks; it's the one mechanism that could really
TEOTWAWKI this place for good.
> this, I hope that endeavors such as Eliezer's succeed grandly.
I don't see how they could succeed. Our best hope is to dull the edge, so
that more people can make it. If the edge is soft enough the ride might be
very smooth indeed.
> To stand back a sec, everything depends on how steep or extreme the S
> will be, of course. But the most thought-provoking assumptions point
Once beyond a certain threshold, all bidirectional flow of information
ceases, both because of the different timebase and because of the loss of
incentive on the advanced player's part. So the gap will only grow, all the
way up to the ceiling (set by the limits of computational physics), which
might be reached quite soon in wallclock time.
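To put a rough number on the timebase problem (a back-of-envelope sketch;
the 10^6 speedup is an assumed illustrative value, not a claim):

    # Subjective time elapsing for a fast thinker while one wallclock
    # day passes for us. SPEEDUP is an assumed example value only.
    SPEEDUP = 1e6
    wallclock_days = 1.0
    subjective_years = wallclock_days * SPEEDUP / 365.25
    print(f"{subjective_years:.0f} subjective years per wallclock day")
    # -> roughly 2700 subjective years per wallclock day

At that kind of ratio there is not much conversation left to have across
the gap.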
> to an AI chewing on tons of material during its first minutes in an
> effort to develop further its own intelligence. It seems inescapable
If it's evolutionary, it will just feed and multiply. Make babies.
> that during these minutes matter very far removed (on the order of
> kilometers away) won't be relevant to its progress. (Recall that
You assume passive transport. Even so, airborne dust travels a lot. Active
transport could involve global dusting from hypersonic vehicles or fractal
clusterbombing via ballistic delivery. Initial growth might be slow, but
exponential processes with active transport tend to be lightning quick
after some setup time.
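To see why the setup phase dominates, here's a back-of-envelope sketch (the
seed mass, target mass and doubling time are assumed illustrative values,
not claims about any particular design):

    import math

    # Doublings needed for a replicator population to grow from an
    # assumed 1 kg seed to an assumed planetary-scale 1e18 kg target.
    seed_kg = 1.0              # assumed seed mass
    target_kg = 1e18           # assumed target mass
    doubling_time_h = 10.0     # assumed doubling time in hours

    doublings = math.log2(target_kg / seed_kg)
    print(f"{doublings:.0f} doublings, "
          f"~{doublings * doubling_time_h / 24:.0f} days")
    # -> about 60 doublings, roughly 25 days at a 10 h doubling time

Almost all of the wallclock time is setup; the last handful of doublings
does most of the work.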
> light goes about a foot in a nanosecond, and a few kilometers will
> seem far away indeed to the AI.)
Not if the next being is centimeters away. Why do people always assume
there's just one individual? It's a population, and it's radiating furiously
even as you watch it.
> Thus I find inexplicable your remark about growth being handicapped
> by confinement to a planetary surface. So I'd say that even after
> the first minutes---during which the AI reaches unimaginable advancement
> ---any off planet raw materials would seem hopelessly distant for its
> use. (Of course, it will conquer all that matter by and by---it's just
> that the central few meters or kilometers will---perhaps always---be
> at the core of development.)
Why did the superchicken cross the galaxy? To breed on the other side.
> This leads to the picture that ten years ago I talked about with friends
> that I called "The Wind from Earth": until the top of a technological
> sigmoid is reached here, Earth technology will rule. (I'm sure that
> many have had the same idea.) Distant matter---as close as Neptune or
> as distant as galactic center---will never catch up, but will feel
> a technological gradient flowing from Earth. Life for them will be
> a constant struggle trying to determine just how subversive are the
> latest extremely advanced algorithms (patterns) sent from Earth.
I think what you would see is a succession of colonizing beings (control
of the hardware layer is too important to be given away), wave after wave
after the pioneers have passed. Similar to the ecological succession you
see after a volcanic eruption.
> The same reasoning could apply to even within a few meters of the
> "hot spot" as you call it. Some people, like John Smart, are convinced
> that it might be worse than that, and that more and more extreme
> processing will occur on yet smaller scales. (I've never really
Femtotechnology? It's interesting that we might have the first tentative
evidence of strangelet clumps (which could be alive, for all we know)
passing through Earth:
http://www.telegraph.co.uk/news/main.jhtml?xml=%2Fnews%2F2002%2F05%2F12%2Fwnugg12.xml
The stuff is certainly dense. If it is sufficiently stable, can be
structured at pico- and femtoscale, and can operate in the MK temperature
range, it could be very useful for a number of processes, including
computation. It would also give you a recipe for a small-scale fusion
plant, be it an ICU or a turbine.
> understood why it is supposed that physics allows unlimited
> processing at sufficiently small scales. We hardly have any
> evidence for that, do we?)
Not really, unless the strangelet thing gets verified. Notice that you
might need grand-scale construction to fabricate strangelet matter. If it
occurs naturally, there must be a lot of it out there, but most clumps will
be too large, and you have to catch and process the smaller bits.
> But what I'm talking about could happen: gobs of matter hundreds
> or thousands of meters from the center might also have to struggle
> to keep their identities from being obliterated. (At this point
> some will invariably ask, "Why? What motivates them?" The answer
> is *evolution*, of course, which is the answer to almost everything.
> Fluctuations that are able to resist identity-destroying changes
> by definition are those that survive.)
I always wonder why people propose outlandish motivations for postbiology,
while the good old "making babies" seems sufficient.
> Lee
>
> P.S. Lee Corbin intends to remain a fluctuation that resists
> identity destroying changes! I hope that all of you survive too.
We'll all live forever, or die trying.