From: Robin Hanson (rhanson@gmu.edu)
Date: Tue Nov 23 1999 - 08:28:43 MST
Robert Bradbury wrote:
> > Let me paraphrase to see if I understand you. You seem to be saying
> > that creatures at one star care only about the energy and metals at
> > that star.
>
>This comment seems to make clear what part of the problem is. I'm not
>sure if in any of the discussions, I have clearly stated my assumptions.
>It may be unclear to others the problem I'm trying to solve. Since
>Extro3, I've been wrestling with the sole question of: ...
> "When does the cost of reducing your hazard function exceed the
> benefit in increased longevity derived from such a reduction?"
>... three things that allow longevity: ... hazard avoidance ...
>exoskeletons or flight ... size ... intelligence ... At some point
>additional ... intelligence ... will fail to produce corresponding
>reductions in the hazard ... you stop "growing".
>In my framework, any exploring by intelligent sub-entities or
>division of "self", is likely to *increase* your hazard function
>and is an undesirable course. The intelligent sub-entities ...
>are (a) physically smaller and (b) less-intelligent. ... historical ...
>reasons for migration or colonization ... are probably irrelevant.
>The prime motivator of behavior revolves around the minimization
>of the hazard function ... Now at this point, *if* you have
>effectively solved your personal survival (longevity) problem ...,
>the question that I have *not* solved is: What do you optimize?
>or What do SIs value? Intelligence? Memory? Information? History?
>Creativity? Beauty? Art? ... Without those answers, determining
>the logical structure of single SIs or SI cultures may be difficult.
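(An aside before I respond: here is a toy way to state Robert's
cost/benefit question. Everything in it is my own assumption, not
his: a constant hazard rate h, so that expected remaining lifetime
is 1/h, and a made-up schedule on which each further halving of h
costs ten times as much as the last.)

    def expected_lifetime(h):
        # With a constant hazard rate h (deaths per year), the
        # expected remaining lifetime is 1/h years.
        return 1.0 / h

    def marginal_benefit(h, new_h):
        # Extra expected years gained by lowering the hazard rate
        # from h to new_h.
        return expected_lifetime(new_h) - expected_lifetime(h)

    # Hypothetical numbers: start at one fatal hazard per million
    # years; value each expected year at 1; each further halving
    # of h costs ten times as much as the previous one.
    h, cost, value_per_year = 1e-6, 1.0, 1.0
    while True:
        new_h = h / 2
        benefit = value_per_year * marginal_benefit(h, new_h)
        if cost > benefit:
            break          # Robert's "you stop growing" point
        h, cost = new_h, cost * 10.0
    print("stop reducing hazard at h =", h)

The only point of the sketch is that with any cost schedule that
eventually grows faster than the longevity gain, there is a definite
point past which further hazard reduction is not worth buying.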
It seems that you assume that:
Virtually all advanced creatures in the universe care essentially
only about their *individual* longevity.
And though you have not said so explicitly, you seem to have a "fast,
high-communication computation" concept of the individual. That is,
you seem to preclude a creature whose concept of itself includes
things, like computers around distant stars, that can't contribute
to computations which must be done within a short time and require
high levels of communication between parts. After all, a creature
focused on the longevity of its clan or species might act very
differently.
You seem to want to allow these creatures to place a small value on
things like art, but if you allow that, I can't see why you don't also
allow them to place a small value on colonization, and then we have
to explain the lack of colonization again. Similarly, to explain
the lack of colonization you have to assume that virtually all
creatures have this singular focus on "individual" longevity.
I accept that the ultimate test of your assumption is empirical,
and so we should consider whatever empirical evidence you offer.
But if you think there are theoretical reasons for us to think
your assumption plausible, you need to clarify them. That all
creatures value only longevity is not implied by their being
conscious, nor by an evolutionary selection of creatures.
The only plausible scenario I can imagine for producing this
situation is if a single power arises early in the universe and
quickly fills the universe without actually colonizing it much,
and actively destroys any creatures with other values.
Robin Hanson rhanson@gmu.edu http://hanson.gmu.edu
Asst. Prof. Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030
703-993-2326 FAX: 703-993-2323