Re: Uploading

From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Sun Mar 10 2002 - 01:54:01 MST


Richard,

I'll offer this to you and perhaps several other list members.
See if you can expand your ExI list contributions to include
a 24 hour integration window. I've done this to some extent
in the past (responding to multiple posters with 1 message).
Given the distribution of the list across Europe, the Americas,
and Asia/Pacific, and the delays there may be in delivery,
one can improve the density of the list by allowing time for
comments to rotate around the globe before integrating them oneself.

[It would be a different and quite interesting extropian thread
to discuss how to train ourselves to respond in the most
extropic fashion -- i.e. learning to distinguish quick
responses that correct errors of fact, from integrated
responses that merge lines of thought, from deep responses
(papers?) that attempt to extend the knowledge boundaries.]

If we cannot retrain *ourselves* then humanity as a whole is lost.

On Sat, 9 Mar 2002, Richard Steven Hack wrote:

[I'm attempting to condense, omitting comments that I think
are mostly editorial.]

> [snip] I see no reason to worry about physical limits at this
> time, thank you.

Yes and no. I agree that there may be a "window" of paradise.
It will only exist, however, if the phenomenon we currently see
(that in countries where affluence increases, the reproductive rate
declines to at or below replacement rates) continues to hold. For you
not to worry about the "physical limits" you have to make some
non-trivial assertions about the limits on arbitrary copying.

I'm not saying that copying limits might not develop. I'd just
be more comfortable if we had well-defined reasons why they are
"likely" to develop. [The anti-self-competition
rules may be one such reason.]

> Ah, yes, Robert Ettinger's hypothesis, I assume.

I'm unaware of Ettinger's hypothesis; I'll add it to the list
of things about which I should be educated.

> If you exist at a single point in space/time, you can't be immortal;
> if you are distributed, you can be.

It comes down to a question of the spatial/temporal distribution
of events that can significantly disrupt the organization of matter
within a region of space or period of time in the universe.

> I'm not convinced. Worse, I have a metaphysical problem with the notion of
> "distribution" - that I do not know of any technology to accomplish it AND
> preserve the identity and continuity of the entity in question.

The "technology" to accomplish it is relatively easy. Take every neuron
in your brain, spread them out over the volume of the solar system and
tie them togeather with laser interconnects for the synaptic signals.
You might have to alter the rate of internal signals (to match the
external time delays), but there is nothing really "magical" about
this distribution process. In fact, I would presume that one could
compute the "real time" distribution distance where ones brain could
function at normal speeds replacing the time for diffusion of
neurotransmitters across synaptic junctions with the speed-of-light
transmission of equivalent information content. Move the neurons
farther apart than that and you become slower than current "real time".
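
As a back-of-the-envelope sketch of that break-even distance (in
Python, assuming a ~1 ms chemical synaptic delay -- the real figure
varies by synapse type):

# Back-of-envelope: the maximum neuron separation at which a
# speed-of-light interconnect is no slower than the chemical
# synapse it replaces. Numbers are illustrative assumptions.

C = 299_792_458.0        # speed of light in vacuum, m/s
SYNAPTIC_DELAY = 1e-3    # assumed chemical synaptic delay, ~1 ms

max_link_m = C * SYNAPTIC_DELAY   # break-even link length, m
print(f"break-even link length: ~{max_link_m / 1000:,.0f} km")

That comes out to roughly 300,000 km -- on the order of the
Earth-Moon distance, and far short of solar-system scale.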

> As noted above, the concept of "distribution" seems to be hand-waving since
> we know of no technology (at this point) to do it and as noted above
> preserve the identity and continuity of the entity involved.

I don't like to hand-wave, so I generally postulate real technologies
based on current laws of physics. I've proposed a "distribution"
method (above). One could additionally add genetic modifications,
such as less efficient sodium/potassium pumps, so that neurons fire
more slowly to match the increased propagation delays for synapses
that are distributed over larger distances.
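
To put a rough number on the required slowdown (assuming, purely for
illustration, a ~10 AU neuron spread):

# Rough slowdown factor if synaptic signals must cross solar-system
# distances at light speed. The 10 AU spread is an assumed figure.

C = 299_792_458.0      # m/s
AU = 1.495978707e11    # astronomical unit, m
SYNAPTIC_DELAY = 1e-3  # assumed chemical synaptic delay, s

one_way_delay = (10 * AU) / C              # ~5000 s per signal hop
slowdown = one_way_delay / SYNAPTIC_DELAY  # hop delay vs. synaptic delay
print(f"one-way delay: {one_way_delay:.0f} s, slowdown: ~{slowdown:.1e}x")

So a solar-system-scale brain of this design would run roughly five
million times slower than its compact counterpart.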

I will freely admit that it isn't clear how much of our thought,
creativity, experience, etc. may be wrapped up in the specific temporal
encoding rates of our current neuronal architecture. How well these
can be "distributed" remains an open question, and one that requires
significant further investigation.

> "Uploading" as it has been described to me does not meet my criteria.

The problem may be with the lack of specificity in descriptions.
For me, "evolved uploading" is the correct path for existing humans:
one develops increased interconnectivity between one's brain and
external wet/dryware (think of moving the cell phone inside your ear/
voice box and increasing its bandwidth).

Once one has developed links between one's mind and one's external agents,
distributing them becomes a question of computational efficiency.

> Ah, I may miss the point. Why would the distributed entities produce the
> least novel info?

The critical issue in computing is accounting for the cost of propagation
delays. The longer it takes to transmit a volume of information,
the longer it takes to produce novel information based on that earlier
information base. The densest computing entities have the lowest propagation
delays (and the highest hazard functions) and so produce the greatest amount
of novel information per unit of real time. Those who opt for increased
survival may have to pay the price of decreased marketability.

Think of it this way: do you spread your neurons over the volume of
a 1 cm^3 sugar cube, ~1000 cm^3 (roughly current human brain dimensions),
the volume of a city, the volume of a planet, or the volume of a solar
system? You have to weigh the costs of such an expansion
(in novel information produced per unit time) against the benefits
(in terms of increased probability that sufficient quantities of
one's computronium survive to retain one's identity).
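
A toy comparison of those scales (the sizes are rough, assumed
figures) makes the light-speed ceiling on serial "integration steps"
explicit:

# Characteristic light-speed latency across candidate "brain volumes",
# and the resulting ceiling on serial integration steps per second.

C = 299_792_458.0  # m/s

scales = {
    "sugar cube (1 cm)":     1e-2,
    "human brain (~10 cm)":  1e-1,
    "city (~30 km)":         3e4,
    "planet (~1.3e7 m)":     1.3e7,
    "solar system (~10 AU)": 1.5e12,
}

for name, diameter_m in scales.items():
    latency = diameter_m / C     # one-way crossing time, s
    steps_per_s = 1.0 / latency  # upper bound on serial steps/second
    print(f"{name:24s} latency {latency:9.2e} s  "
          f"-> <= {steps_per_s:9.2e} steps/s")

The sugar cube gets tens of billions of serial steps per second; the
solar-system brain gets one step every hour and a half or so.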

> Mike Lorrey seems to believe the opposite - that you
> distribute yourself to increase your rate of experience. You seem to be
> saying the opposite.

Mike is correct that dispersal of the experiencing entities
will increase the quantity and bandwidth of experience accumulation.
In that situation the loss of a single "experiencing" agent probably
isn't very significant. I'm looking at the survival of the entity
responsible for integrating the experiences. If the integrator is
concentrated in one place, it is vulnerable. If it is distributed
(under current physical laws), it has constraints on its integration
rate.
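
A toy model of that trade-off (the per-site loss probability and the
separations below are hypothetical, chosen only to show the shape of
the curve):

# Redundant copies of the integrator state cut the chance of total
# loss, but spreading them out stretches the light-speed delay
# between copies. All numbers here are assumed for illustration.

C = 299_792_458.0   # m/s
P_SITE_LOSS = 1e-3  # hypothetical chance a single site is lost per year

configs = [
    (1, 1e-1),     # concentrated: one site, ~10 cm across
    (3, 1.3e7),    # planetary: three sites, planet-diameter apart
    (10, 1.5e12),  # solar system: ten sites, ~10 AU apart
]

for n_sites, separation_m in configs:
    p_total_loss = P_SITE_LOSS ** n_sites  # all copies lost (independence assumed)
    sync_delay = separation_m / C          # one-way delay between copies, s
    print(f"{n_sites:2d} sites, sep {separation_m:.1e} m: "
          f"P(total loss)/yr = {p_total_loss:.0e}, "
          f"sync delay = {sync_delay:.1e} s")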

If you have two experiences that dictate entirely different behaviors
in a specific situation, then conflict resolution is a critical aspect
of "intelligence". How rapidly that conflict resolution can take place
(and how long the resolver might survive) depends on whether it exists in a
1 cm^3 nanocomputer or in neurons (or neuron equivalents) distributed over
the volume of the solar system.

> But I still see no evidence - other than your notion of distribution to
> avoid space/time catastrophe - that replication would be considered
> desirable by a posthuman entity.

I think a critical question becomes whether one desires "replicas"
or "backups". That raises the issue of how one perceives the preservation
of knowledge states identical to one's own.

The "replication" issue goes back to Eugene's and Mike's comments.
If one allows replicas or copies that "execute" in "reality", then
one allows the exploration of divergent paths. Your meta-mind can
then view the paths and select the one most desirable. One prunes
the other paths. I think this is what Eugene is calling the Darwinian
perspective and Mike would call "experience integration" and Eliezer
would call moderately or inherently evil (since one has to suspend
or terminate the execution of sentient beings).
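
That fork/evaluate/prune scheme resembles a simple beam search. A toy
sketch, where fork() and score() are placeholders for whatever
divergence mechanism and desirability criterion the meta-mind
actually applies:

import random

def fork(path, n_copies=3):
    """Spawn divergent successors of a path (here: random perturbations)."""
    return [path + [random.random()] for _ in range(n_copies)]

def score(path):
    """Placeholder desirability measure the meta-mind applies to a path."""
    return sum(path)

# Each generation, every surviving path forks into divergent copies;
# the meta-mind scores them and prunes all but the best few.
paths = [[]]
for generation in range(5):
    candidates = [child for p in paths for child in fork(p)]
    candidates.sort(key=score, reverse=True)
    paths = candidates[:2]  # prune the less desirable paths

print("best path score: %.3f" % score(paths[0]))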

> The "society" notion advanced by others
> - that posthumans would replicate because this would increase the utility
> of all - may or may not be true. We don't know what Transhuman
> intelligence will be like - they may desire or need society; they may not.

There is a reasonable argument for promoting an increase in independent
actors if they really explore the phase space. This is, I think, Eugene's
point. If propagation delays for a minimal-hazard-function entity
prevent it from developing significant improvements along the extropic
vectors, then it is reasonable to promote the development of maximal-
hazard-function entities to explore the phase space (think whales
and insects [or bacteria]).

> On the other hand, it could well be that Darwinian competition continues IF
> in fact there are hard limits and the entities come up against them in a
> reasonably short time (by their lights). If that is the case, then the
> Highlander motto, "There can be only One", may well prove true.

It may be. But I don't think it's that harsh. One may survive perfectly
well as an "agent" within a hierarchy (just as one largely does today).
The "one" may be a meta-mind, while many or most of us are elements in the play.

Robert


