From: Stirling Westrup (sti@cam.org)
Date: Sun Mar 19 2000 - 22:35:28 MST
Robert J. Bradbury wrote:
> > 1) Re: Microlensing & dark matter
> >
> > A partial answer can be found at http://www.cfht.hawaii.edu/News/Lensing/
> > [snip]
>
> This is large-scale dark matter, while I was more interested in the
> microlensing observations from the MACHO, OGLE, PLANET groups et al. The
> best explanation for their data is ~400 billion 0.3-0.5 M_sun objects
> orbiting our galaxy. But one of the many problems with this work is that
> they can only look in very specific directions (nearby galaxies), so
> extrapolations are *iffy*. The Hubble north and south deep fields suffer
> from the same problem.
I hadn't heard *anything* about these folks before, and had simply assumed
that dark matter surveys of our own galaxy were impractical. Darn, now I
have something else to add to my reading lists.
> If either of these is accurate, then
> some galaxies will "go dark" relatively quickly. If star lifting is
> possible, then calculations by Robert Freitas show that you could consume
> all of the fuel in the galaxy in relatively short order, probably leaving
> behind burned out iron memory banks, some black holes and neutron stars.
> Then they might colonize & consume neighboring galaxies that go dark, etc.
> The dark regions with mass are where life got lucky and got an early start
> or is particularly aggressive in its energy use or colonization principles.
A nifty idea, but it doesn't seem to match the data. If your scenario were
true, I would expect the visible (unconsumed) galaxies to be separated by
some distance from the dark matter. As it is, visible galaxies seem to exist
at the juncture points of the dark matter filaments.
As an aside, I once wrote a scenario (called 'Artifact') for a role-
playing game in which I assumed that the dark matter was a single *large*
structure that had been built in hyperspace by powers unknown. The
player's job was to try and figure out both how and why.
> > > 2 & 3) Re: Self-replication & evolution speed-limits...
> >
> > Are these questions about the absolute limits under any circumstance (at
> > which point I would say it looks to be around 1E-30 to 1E-40 seconds for
> > both questions), or is it a question about what parameters are critical
> > to setting a particular speed limit under particular circumstances. This
> > is a far more complex, and IMHO a far more interesting question.
>
> They are questions about the minimum rates at which stuff can be
> built or evolve. Since Tom McKendree and JoSH have pointed out that the
> fastest way to build stuff is to build ~0.6 the mass of your final product
> as assemblers, then have the assemblers build the final product
> (disassembling themselves as necessary), the question becomes how fast you
> can build the mass of assemblers.
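(Tangent: to get a feel for that ~0.6 figure I scribbled the toy timing
model below. Everything in it, the doubling time tau, the seed mass, the
assumption that a unit of assembler mass replicates or builds one unit of
mass per doubling time, is my own simplification rather than McKendree and
JoSH's actual analysis, and it puts the optimum near ln 2 = 0.69 instead of
0.6, presumably because I'm ignoring details like the disassembly step.)

# Toy model (my simplification, not the McKendree/JoSH analysis):
# grow assemblers exponentially up to a fraction f of the final mass M,
# then let that assembler mass build the remaining (1 - f) * M linearly.
# 'tau' is the assembler doubling time; 'seed' is the starting assembler mass.
import math

def total_build_time(f, M=1.0, seed=1e-12, tau=1.0):
    grow = tau * math.log2(f * M / seed)   # exponential growth phase
    build = tau * (1.0 - f) / f            # assemblers add ~their own mass per tau
    return grow + build

if __name__ == "__main__":
    for f in (0.3, 0.5, 0.6, math.log(2), 0.9):
        print(f"assembler fraction {f:.2f}: {total_build_time(f):6.2f} doubling times")

The curve turns out to be remarkably flat around the optimum, so the
precise fraction seems to matter far less than the doubling time itself.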
Okay, then it seems that you are asking the second kind of question, where
the context of the replication/evolution is critical in order to derive an
answer. Your (partial) list of concerns:
> Chemical reaction rates? Tip movement time?
> Delivery of material to the tip? Heat removal? What makes the fastest
> material? Do TiC "biochemistries" do better because they can tolerate higher
> operating temperatures (i.e. less volume and energy needs to be devoted to
> heat removal)?
all seem to make strong assumptions about the physical and chemical makeup
of the assemblers being used, not to mention particular engineering
principles. I would expect any rates calculated from the design of
first-generation assemblers to go out the window for the second or third
generation.
For example, you seem to be imagining a homogeneous soup of one type of
assembler doing all of the work. They are provided with materials and
chemical compounds and they build new assemblers atom by atom. This seems
likely as a first version approach.
A more mature technology (pulling a random idea out of the air) might
involve a very different sort of assembler that floats in some kind of
fractionated electrorheological fluid. In such a fluid the nano-, meso-
and bulk structures would form in reaction to tiny patterns of electrical
activity. The "assemblers" would be intricate energy modulators that would
attempt to ensure that electrons in the neighboring fluid were moving in
the desired patterns to produce the needed local structure.
Could we make such a liquid? I dunno. I'm just trying to point out that
the ultimate limits to how fast things can reproduce or be formed into the
right shapes strongly depend on the underlying technology.
> With evolution it's a similar can of worms.
Except that I'm not sure it's as well-defined a problem. If a 'unit' of
evolution is the creation of a new useful mutation, then all you need is
enough self-replicators randomly mutating themselves and you get one
improvement per generation. (Ignoring local maxima on the fitness plane,
and other hairy complications.) This doesn't seem a terribly *useful*
definition of evolution. If you start trying to make it more realistic by
considering rates of spread of a mutation through a species, and the
possibility of a changing environment, the murk rapidly gets thicker.
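To show what I mean about the definition being too easy to satisfy, here
is a throwaway simulation; the population size, mutation rate and the
count-the-1-bits fitness function are all arbitrary choices of mine,
picked precisely so that the landscape has no local maxima and
improvements spread instantly:

# Throwaway illustration: a large population of randomly-mutating bitstring
# replicators on a deliberately smooth fitness landscape (count of 1-bits).
# With no local maxima and instant spread of improvements, fitness climbs
# nearly every generation, which is why this is a weak notion of 'evolution'.
import random

GENOME_LEN = 64
POP_SIZE = 10_000                 # "enough self-replicators"
MUTATION_RATE = 1.0 / GENOME_LEN

def fitness(genome):
    return sum(genome)            # smooth by construction: no local maxima

def mutate(genome):
    return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

population = [[0] * GENOME_LEN for _ in range(POP_SIZE)]
best = 0
for generation in range(30):
    population = [mutate(g) for g in population]
    champion = max(population, key=fitness)
    if fitness(champion) > best:
        best = fitness(champion)
        # unrealistic shortcut: the improvement spreads to everyone at once
        population = [list(champion) for _ in range(POP_SIZE)]
    print(f"generation {generation:2d}: best fitness so far = {best}")

Make the landscape rugged, or make the spread of a mutation take realistic
time, and the tidy improvement-per-generation picture falls apart
immediately.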
> Xenology by Robert Freitas lays
> a foundation for some of the various "natural" biochemistries that could
> exist in the universe. Does life develop in any of them? Does it develop
> faster or slower? How dependent is the rate of evolution on the available
> mass? Operating temperature? Mutation sources? Global-scale stresses
> (extinction events)? Are there artificial biochemistries in which
> evolution can proceed more quickly?
Again, without knowing exactly how you are measuring 'evolution', I can't
even figure out which of the above questions are meaningful, and which are
not.
> > > 4) Re: number of useful "biochemistries"
>
> I wasn't so much interested in Carbon/Water biochemistries (there are
> clearly lots of those) as Carbon/Ammonia or Carbon/H2S or systems
> based on Si or TiC or InGaAs. You have 92 natural elements, how many ways can
> they be put together to produce self-replicating systems that can "host"
> intelligent consciousness? For extra credit determine which chemistries
> are optimal for the architectures in Question 9...
So, I would say the number is very, very, very large indeed. If you allow
a definition of life from complexity theory, then it's possible to devise
*political voting systems* that are alive.
>
> > > 6) Re: Breakout of amoral alife...
> >
> > About the same probability that a chemist mixing amino acids in a lab
> > will accidentally create a plague-form capable of wiping out our species.
> >
> Not clear. There are a few labs doing life creation and
> evolution of enzymes experiments and those experiments presumably
> work with moderately large number of molecules. But, there are a lot
> of bits flying around and that high rate of growth means that the
> quantity of information being played with by computers exceeds the
> quantity being played with in chemistry experiments at some point.
> Would an AmoralAlife@Home program find the "dark"-ring before the chemists
> do?
The problem is that if your typical chemical frankenbug escaped into the
wild, then the current denizens, having evolved in a tooth-and-nail
environment, would simply eat it. By the time we have a computer and
communication infrastructure sophisticated enough to host programs to
which I would be willing to grant the title of 'alive', it will be just
as hostile to digital frankenbugs.
As far as I can see, one absolute requirement for the spontaneous creation
of life is a sterile environment for it to happen in, so that it can
survive long enough to adapt, and I can't see that happening *by
accident*.
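Coming back to your bits-versus-molecules crossover for a moment, this is
the sort of back-of-envelope I picture; every constant in it is a
placeholder I'm inventing purely for illustration:

# Back-of-envelope for the "bits manipulated by computers vs. molecules in a
# wet-lab experiment" crossover. All three constants are placeholder guesses.
MOLECULES_PER_EXPERIMENT = 6.0e23   # roughly one mole of reactant
BITS_MANIPULATED_TODAY = 1.0e18     # wild guess at globally active bits
ANNUAL_GROWTH_FACTOR = 2.0          # Moore's-law-ish doubling every year

bits, years = BITS_MANIPULATED_TODAY, 0
while bits < MOLECULES_PER_EXPERIMENT:
    bits *= ANNUAL_GROWTH_FACTOR
    years += 1
print(f"crossover after ~{years} years of doubling")   # ~20 with these guesses

Plug in different guesses and the crossover date swings by decades, so I
wouldn't read too much into any particular year.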
> > Note that this is a vastly lower probability than the chance that a
> > deliberately constructed, amoral, self-evolving, self-replicating
> > AI/Alife will "breakout" by accident.
>
> We have examples of those today with computer viruses. Fortunately
> there aren't any nanoassemblers for them to take over and the people
> doing the ALife work are pretty responsible and we seem to be getting
> better at producing firewalls and speeding up the rate at which
> software loophole fixes are applied (for example there are bots now that
> crawl the net warning people about security holes).
Sure, but I'm not sure I'm willing to call a computer virus 'alive' in any
meaningful sense. Then again, I'm somewhat iffy on the question of how
alive a biological virus is. As to the Alife researchers, I'm one myself
on occasion, and there simply isn't a need to take precautions. One of my
little creations could no more survive in my computer without its
environmental support structure than a typical computer virus could
survive in my stapler.
--
 Stirling Westrup  |  Use of the Internet by this poster
 sti@cam.org       |  is not to be construed as a tacit
                   |  endorsement of Western Technological
                   |  Civilization or its appurtenances.