Re: Living under water

From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Wed Jun 27 2001 - 11:08:28 MDT


Wading into a debate with someone from Sweden, I must have
rocks for brains... :-)

On Wed, 27 Jun 2001, Mikael Johansson wrote (in reply to me):

> I'm not sure I agree with this... A social relation would exist as
> soon as interaction between individuals occurs,

Is the interaction between two "neurons" (or Calvin's hexagonal
pattern-preserving neuronal "groups") "social"? Is an interaction
across many light years of 10^-45 of your knowledge base "social"?

In a greatly expanded definition of the term, perhaps. However, I
would confine the general understanding of the term "social"
to the stuff "sociologists" study -- i.e. exchanges of small
but significant fractions (perhaps 10^-6?) of your knowledge base
with individuals with somewhat overlapping knowledge bases.
I think in the future these will constitute a very small fraction
of the information exchanges because "individuals" as we know them
are likely to be a very small fraction of the total life in the galaxy.

An interesting discussion would involve what an "individual" really
is since you should be able to "pick up" and/or "replace" the
complete knowledge representation that constitutes an "individual".
Are you the container or what the container holds?

> and an economical relation as soon as this interaction calls
> upon either individual to /do/ anything at all -- or did I just
> misunderstand some basic concepts?

I'm not sure what "economics" looks like at the SI level.
The cost of transporting significant amounts of anything
(matter or information) across interstellar distances suggests
to me that it doesn't occur (i.e. interstellar trade is pointless
for the most part). Now within an SI you may have the concept
of "individuals" (single computational nodes or a tightly
coupled group) but once you have used all the matter in
the solar system it seems unclear what gets traded "within"
the SI. If you assume every "entity" gets an equal portion
of the matter & energy (the ultimate socialist state(!))
then it seems you only have economics within the context
of the present value of present-day and future-day thought
units (I'll give you 3 thought-units today for 4 tomorrow...).
Perhaps this supports an "economy" but it seems very different
to me from the one we have today.
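As a toy illustration of that last point, here is the one-period
"interest rate" implied by such a trade; only the 3-for-4 barter comes
from the paragraph above, the rest is my own framing:

  # Sketch of the rate implied by trading thought units now for
  # thought units later.  Only the 3-for-4 trade is from the post.
  def implied_rate(units_now: float, units_later: float) -> float:
      """Rate r such that units_now * (1 + r) == units_later."""
      return units_later / units_now - 1.0

  print(implied_rate(3, 4))   # ~0.33, i.e. a 33% per-period rate

Whether an "interest rate" on thought units means anything once every
entity already holds its fixed share of matter and energy is, of
course, the open question.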

> > - The concept of "a hermits life in isolation" is the
> > "ground of being" for advanced civilizations in the universe.
>
> Why?

Because sending a significant fraction of your knowledge base
(i.e. the type of exchanges that humans have) is extraordinarily
expensive when you have > 10^50 bits of knowledge. The relative
quantity of information exchange between advanced-SI civilizations is
probably comparable to what I currently have with some poor uneducated
farmer in Sri Lanka (e.g. some atoms from skin cells I shed
every day may eventually make it into his food supply).
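To put a rough number on "extraordinarily expensive" (the link rate
below is purely my own generous guess; the 10^50 bits and the 10^-6
"social" fraction are from earlier in this post):

  # Back-of-envelope: time to ship a "social" fraction of an SI-scale
  # knowledge base over an assumed petabit/s interstellar link.
  knowledge_bits  = 1e50    # bits in an SI knowledge base (from the post)
  social_fraction = 1e-6    # "socially significant" fraction (from the post)
  link_bits_per_s = 1e15    # assumed petabit/s laser link -- a generous guess
  seconds = knowledge_bits * social_fraction / link_bits_per_s
  print(f"{seconds / 3.15e7:.1e} years")   # ~3.2e21 years

Even with wildly optimistic bandwidth, a single human-style exchange
takes many orders of magnitude longer than the ~1.4e10-year age of
the universe.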

Now, if you limit your growth to ~10^-10 of the computronium
available to an SI, you can probably retain some concept of
"individualism" and have societal interactions similar to those
you have today. However, you presumably have the choice of
scaling both your "thought" capacity and your "communications"
capacity so you can live an existence ranging from a completely
non-communicating "hermit" to a "flow-through" architecture
where second by second or minute by minute a completely new
identity pattern is adopted. This would probably be the
ultimate end-point of existentialism.

> > - "psychological needs" are entirely a product of evolution --
> > once we have our "hands" on the dials the "conventional" needs
> > become irrelevant
>
> Why? It is not a self-evident development.

"Needs" exist to produce survival promoting behaviors. If identies
within an SI exist within a competitive "survival" based environment
(rather than a "rights" based environment) then I suspect a few
of the early adopters will get most of the cake (as one of Robin's
papers points out). It seems it comes to equilibrium either with
a single identity dominating the allocation of matter and energy or
with mutually agreed-upon rights for the remaining identities. In those
situations "survival" concerns as we know them cease to exist.
Similarly, if Eliezer's SysOp strategy is successful, then it
presumably eliminates the tooth-n-claw struggle for the survival
of individuals. So it seems to me that multiple paths lead to
the loss of "survival" concerns and therefore the loss of a need
for "conventional" psychological motivations.

So what happens then? You can replace the human "fight"
or "flight" response with a much faster program that assesses the
degree of danger and chooses a specific response out of a much
broader set of actions that have been extensively simulated.
You can replace the "need" for social contact, which most likely
grew out of the survival benefits of tribal societies, with a
perfectly equivalent "contentment" program that runs while you are
entirely alone. Etc. All of the psychological programs we now have,
which are loosely coupled with our wet biological systems, get
replaced with much better modules to serve specific functions (*if*
they are still necessary).
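A minimal sketch of what such a replacement module might look like;
the danger levels, actions, and thresholds are all invented purely
for illustration, not anyone's actual proposal:

  # Toy "fight or flight" replacement: assess the danger, then pick
  # the best action from a pre-simulated set for that danger level.
  ACTIONS = {
      "low":    ["ignore", "log"],
      "medium": ["negotiate", "relocate"],
      "high":   ["shield", "disperse", "counter"],
  }

  def assess_danger(signal_strength: float) -> str:
      """Map a sensor reading onto a coarse danger level."""
      if signal_strength < 0.3:
          return "low"
      if signal_strength < 0.7:
          return "medium"
      return "high"

  def respond(signal_strength: float, utility) -> str:
      """Choose the highest-utility pre-simulated action."""
      return max(ACTIONS[assess_danger(signal_strength)], key=utility)

  # e.g. respond(0.8, lambda a: len(a)) selects among the "high" actions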

> Why not (d) We get to change our own code, but choose to keep social
> and economical relations in one way or another since it would be the
> more efficient solution?

Anders has some interesting ideas regarding the use of economics
"within" an AI development framework to choose paths. Just
theorizing, I wonder if this leads to cognitive macroeconomics
and cognitive microeconomics... I'm not disallowing (d); I am
suggesting that it becomes a very small fraction of the phase
space. If the main point for our current code and interaction
methods was to survive and produce copies of our genes and we
get to the point where most of our existence is about the
survival and copying of "memes", then it is questionable whether
current-day "common" social & economic relations make sense.

For example: is there a need for "friendship" for security, or
"contracts" for some level of "trustability", in a social environment
of guaranteed rights and inherent "trustability" (because that is
the game-theoretically optimal solution)?

>
> > [snip] In that situation all of this discussion of "society" & "economics",
> > "affection" and "approval" becomes pretty irrelevant.
>
> Will it really?
> Why should our becoming SIs abolish our wish to interact with each other?

Those needs and interactions will be important only if you constrain
yourself to a self-architecture/identity based largely on the ones
you are now familiar with.

In thinking about this, I realize that part of the reason for my
perspective is having been trained as an electronics engineer,
a computer scientist, and a molecular biologist -- all of these
involve thinking about "architectures" and "interconnections".
People on the list who don't view things from this perspective
(whose educations or professions may involve greater levels of
more common human interaction modes) may see things quite differently
from my point of view. There is probably a strong predisposition for
individuals to lean toward that which they are most familiar with.

> [snip]
> it seems to me that a slightly concealed selfcontradiction is
> present...

I'm not seeing it right now unless it's related to Greg's comments
that you have to wrestle with the period up to the singularity
to get to the point where you have the really-long-term perspective.

Robert


