Re: seti@home is SORTA WORKING

From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Sun Jul 11 1999 - 17:21:00 MDT


> hal@finney.org wrote:
>
> I think this speculation is every bit as valid as the less imaginative
> ones where we "know" what SIs would do. I really don't see how we can
> claim to predict their actions!

Can we use "natural laws" to "attribute" behaviors to SIs:
 - We will presume they have a "survival instinct", so
   (a) SIs use the minimal resources necessary to anticipate hazards
       and avoid them. In extreme cases they might act to eliminate
       them. But even SIs have limits -- it would seem they can't
       make "black holes" go away; it may even be difficult to move them.

       My understanding is that gravitational systems of three or more
       bodies are not predictable in the long term (this is the N-body
       problem in astronomy; the orbits are chaotic over long timescales).
       As the SI has perhaps a 600-billion-body problem to solve at least
       100 billion years into the future, where some of the bodies may not
       follow "natural laws" (i.e. they are SIs that decide to change
       course for independent reasons), this could very well take a
       significant amount of computation (a toy sketch below gives a
       feel for the cost).

   [I extrapolate from this to assume that they tend to migrate away
    from the centers of galaxies, which contain massive black holes,
    and away from regions of high stellar density (where close stellar
    encounters and collisions occur) or high gas density (which forms
    massive stars that go supernova). The lower the density of nearby
    objects, the lower the energy cost of positioning yourself to
    avoid hazards.]
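
   [For a feel of what that forecasting chore involves, here is a toy
    sketch in Python. It is only illustrative -- the body count, masses,
    velocities and step size are numbers I made up, and direct summation
    is the dumbest possible method -- but it shows why the cost grows as
    the square of the number of bodies at every time step:

      # Toy direct-summation gravity step. Every constant below is an
      # illustrative guess; a real forecast over ~1e11 years and ~6e11
      # bodies would need a vast number of steps and a smarter algorithm.
      import numpy as np

      G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

      def gravity_step(pos, vel, mass, dt):
          """One symplectic-Euler step of pairwise Newtonian gravity."""
          diff = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]   # r_j - r_i
          dist3 = np.linalg.norm(diff, axis=-1) ** 3 + 1e-30     # avoid /0
          acc = G * np.sum(mass[None, :, None] * diff / dist3[..., None],
                           axis=1)                               # O(N^2) terms
          vel = vel + acc * dt
          pos = pos + vel * dt
          return pos, vel

      rng = np.random.default_rng(0)
      n = 1000                                   # stand-in for ~6e11 bodies
      pos = rng.normal(scale=1e17, size=(n, 3))  # ~10 light-year spread
      vel = rng.normal(scale=1e4, size=(n, 3))   # ~10 km/s random motions
      mass = np.full(n, 2e30)                    # one solar mass each
      for _ in range(10):
          pos, vel = gravity_step(pos, vel, mass, dt=3.15e13)  # ~1 Myr steps

    And because the system is chaotic, any body that "changes course for
    independent reasons" forces the whole forecast to be re-run.]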

 - They want to get as much benefit for the lowest cost (simple economics).
   Since a significant "benefit" for SIs is "more thought", or "more efficient
   thought" (more thought/watt consumed), or "faster thought" (more
   thought/second), presumably they devote a great deal of time to
   considering "What is the optimal way to solve problem X?", where
   problem X is one that is probably beyond our imagination.

   For you hardcore theorists, problem X's are those that probably
   require from 10^60 to 10^80 instructions (roughly 1 year to
   a lifetime of M-Brain thought, depending on the size of the star
   powering the M-Brain).
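
   [To show how loose those exponents are, a back-of-envelope sketch;
    the luminosity and ops-per-joule figures below are my own guesses,
    and the second is the dominant unknown:

      # Rough calculator: years of whole-star ("M-Brain") computation
      # needed for a problem of a given instruction count. All constants
      # are illustrative assumptions, not measured figures.
      SOLAR_LUMINOSITY = 3.8e26   # watts; a much bigger star raises this
      OPS_PER_JOULE    = 1e25     # assumed efficiency of the hardware
      SECONDS_PER_YEAR = 3.15e7

      def mbrain_years(instructions,
                       luminosity=SOLAR_LUMINOSITY,
                       ops_per_joule=OPS_PER_JOULE):
          """Years needed if the entire stellar output goes to computing."""
          ops_per_second = luminosity * ops_per_joule
          return instructions / ops_per_second / SECONDS_PER_YEAR

      for exponent in (60, 70, 80):
          print(f"10^{exponent} instructions ~ "
                f"{mbrain_years(10 ** exponent):.1e} M-Brain years")

    Shifting either guessed constant by a few orders of magnitude moves
    the answer from "about a year" to "longer than the star will burn",
    which is why the instruction-count range above is so wide.]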

 - Returning to "survival", a significant problem is the death of
   the universe. Now Dyson & Tipler have provided arguments for
   true "immortality" in open and closed universes, respectively,
   but there are significant "costs" in their scenarios. So
   there is a distinct advantage to an M-Brain that can figure
   out how to create another universe (if we believe in multi-verses)
   and tunnel itself into that universe.

 - If an SI largely consists of uploaded "individuals" and their
   copies, then it may spend a significant majority of its time
   running nothing more than increasingly diverse VR simulations
   to keep its immortal uploads entertained. The hazard-avoidance
   program is a small subroutine; most of the computation does
   nothing but create new realities for the entertainment of
   the "inhabitants". Presumably in this case the "will" of the SI
   is determined by some political system/government of the SI.

> Here's another explanation of our astronomical observations. Maybe the
> entire solar system is surrounded by a giant TV screen. On the other
> side, the SIs live and project whatever they want for us to see.

Of course. And all of these discussion threads that I've started
and you've contributed to are nothing more than a VR simulation
my intentionally divorced & noncommunicative entertainment director
subroutine has arranged to allow me to have the perception that
"some people just don't get it...". :-)

Just what part of my VR entertainment program is it that you are
unwilling to accept? I'll email the entertainment director and
then we can all be happy.

On second thought ... "Open Dialog Box", "Create Subroutine -
Adjust-Interactor-Personality-Traits(include-slider-bars-for
emotional-settings)... "Close Dialog Box" [Side note to
self-auto-promptor -- remind me to build a new interface that doesn't
require these annoying dialog boxes someday]

Open/User/Hal/Adjust-Interactor-Personality-Traits
   --> "Disagreement level" - slide down to level 2,
   --> "Don't worry, Be happy" - slide up to 8

But I really think I'm missing something... Let's see...
   --> "Differentiation between technology & magic" -- increase to 10

Close/Resume Program

Feel better now?

>
> Once you accept the notion that the universe as we see it is fundamentally
> structured by intelligence, how can we reject these absurdities?
> If everything we see might be an artifact, then what is the role of
> science and reasoning on the basis of natural laws?

I don't believe it is "fundamentally structured" by intelligence
(though others, Dyson included, have made the observation that the
fundamental constants in the universe are "rigged" to promote life).

I believe that the Universe & Intelligence are co-evolving.
As I mentioned in another thread, if you go back to the
beginning of the universe it is "dead" (no intelligence), but
the current (old/nearby) universe, within a few million to several
billion light years of us, is "alive". "Natural laws" apply concretely
to both the "young" (distant) universe and the "old" (nearby) universe.
The only difference is that in the "young" universe nothing is yet
manipulated by "conscious" (transhumanist?) agendas. You still can't
operate by "magic" that transcends natural laws (e.g. faster-than-light
communication).

[I know that some people "think" this is possible, but our currently
known and tested physics says "very doubtful".]

>
> We have faced many puzzles throughout the scientific era which seemed
> intractable in their time. Nothing we see today in the universe is
> inherently more difficult to understand. We have come up with solutions
> in the past based firmly on natural law, without having to posit divine
> intervention. There is every reason to expect that we will continue to
> do so in the future.
>

I believe I am presenting something that you might consider to be a
natural law.

In a rough form it goes something like:
  (a) Self-replicating nanomachinery randomly occurs
  (b) Self-replicating nanomachinery develops a program for
      self-preservation with occasional variants
  (c) Self-replicating nanomachinery "evolves" a mechanism for
      storing and utilizing "information" [software] (i.e. decisions
      based on learned experience), and software begins to trump hardware
  (d) Intelligent software (IntlgtSft) develops (really trumping hardware)
  (e) IntlgtSft "comprehends" the rules of the game (and the limits).
  (f) IntlgtSft takes us to those limits.

You may argue probabilities at individual steps, but can you
make a case that this is not what is occurring in the universe?
It seems compatible with your "natural laws".

The *only* claim I am making is based on the fact that our
"observations" would indicate that the universe is 10+
billion years old, while we are < 5 billion years old.
We are at approximately level (e). What stops us (or
other, much older species) from going to (f)?

Instead of proposing that (f) would present a problem to our
logical understanding of the universe (upsetting our historic
transition from "magical explanations" to "scientific explanations"),
can you present an argument that (f) *DOES NOT OCCUR*?

Mike, I believe, has tried to argue, "*No*, I won't let you go
there...". I will acknowledge that the luddites might be successful
at an (f)-block in some instances -- can you make a case that they
would be successful in millions or billions of efforts?

Open/User/Hal/ComputationUnitParameters
  --> CPU-Cooling-Speed -- increase to level 10
Close/User/Hal

Many :-)
Robert


