From: hal@rain.org
Date: Mon Mar 01 1999 - 17:01:38 MST
Billy Brown, <bbrown@conemsco.com>, writes:
> hal@rain.org wrote:
> > This architecture extends the power of the posthuman mind without
> > requiring the costly communications and coordination infrastructure that
> > would be necessary to bind all parts of the posthuman mind as
> > tightly as our own mental structure. The posthuman manipulates the
> > world through a small army of agents, all part of it in some sense, all
> > controlled by it, but at least some of the time working independently.
>
> "Costly" in what sense? I would expect both the hardware and software to be
> inexpensive by the time such things are possible.
Costly in terms of bandwidth allocation; in terms of transmission power;
in terms of space for antennas and transmission-reception equipment within
the brain. Electromagnetic radiation appears to be the best candidate for
such communication, and high data rates would be required to integrate a
mind as tightly as the separate parts of our own brains. The various body
parts could be miles apart, separated by massive or metallic objects,
or underground, all of which would make EM communication difficult.
Tightly integrating remote minds looks like a costly proposition.
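For a sense of scale, here is a back-of-envelope sketch in Python (the
constants are my own order-of-magnitude guesses, using the corpus callosum
as a reference point, not measured figures):

    # Rough estimate of the sustained traffic needed to couple two components
    # as tightly as our own hemispheres are coupled across the corpus callosum.
    # Both constants are order-of-magnitude assumptions.
    CALLOSAL_FIBERS = 2e8          # roughly 200 million axons
    BITS_PER_FIBER_PER_SEC = 5     # guess: a few bits/s of usable signal per fiber

    aggregate = CALLOSAL_FIBERS * BITS_PER_FIBER_PER_SEC
    print(f"~{aggregate / 1e9:.1f} Gbit/s, continuously, between just two parts")

Sustaining something like a gigabit per second of radio traffic, through
rock and metal, between every pair of parts, is where the cost comes in.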
> The big reason I see for distributed consciousness is light speed delay.
> You could easily have parts of yourself scattered across a good fraction of
> the solar system, and each component needs to be able to deal with events in
> real time. The elegant solution is a system in which components in close
> proximity merge into a single consciousness, while those that are isolated
> by distance can function as independent people.
Or there could be intermediate states between these two extremes.
It need not be a matter of one consciousness versus independent people.
As the available bandwidth between components increases, you could go
from two independent consciousnesses conversing at a rate similar to
what we are familiar with, to a kind of mental telepathy in which each
is aware of what the other is seeing and thinking, to a complete merging
of thoughts and perspectives, a joint mind.
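To put numbers on the light-speed point, here is a small sketch (the
separations are my own illustrative choices) of the one-way delay at
various distances; within a building or a city you could plausibly stay
merged, while at interplanetary distances each component has to act on
its own for minutes at a time:

    # One-way signal delay at lightspeed for components at various separations.
    C = 299_792_458  # speed of light, m/s

    separations = {
        "same building (100 m)": 100,
        "across a city (20 km)": 20e3,
        "opposite side of Earth (~20,000 km)": 2e7,
        "Earth-Moon (~384,000 km)": 3.84e8,
        "Earth-Mars at closest approach (~0.5 AU)": 7.5e10,
    }

    for label, meters in separations.items():
        print(f"{label}: {meters / C:.3g} s one-way")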
> The trick, of course, is making sure that all your isolated selves continue
> to see themselves as parts of the same person. IMO, you need strong SI to
> make such a system actually work.
I don't particularly see this issue as "the trick". With the mental
structures I envision, I'm not sure that questions like "are you part of
the same person?" are meaningful. Our current ideas of mental identity are
very different from those appropriate to a human-intelligence component
of a larger being.
The main requirement is that all the various parts work together
to produce successful results. This may involve a degree of discord
(as when we can't make up our minds, or are torn between conflicting
desires). The individual agents may not be happy with the resulting
decision in every case. But a successful overall mental architecture
must have some mechanism to continue operating smoothly in the face of
internal disagreements. (It seems like some forms of insanity can be
thought of as failures of similar mechanisms in our own minds.)
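Purely as a toy illustration of the kind of mechanism I mean (the names
and the weighted-vote rule are my own invention, nothing more than a
sketch), think of the agents as casting weighted votes and the mind as
committing to the winner even though some agents lose:

    from collections import defaultdict

    def arbitrate(votes):
        """votes: list of (agent, action, weight); returns the action to commit to."""
        totals = defaultdict(float)
        for _agent, action, weight in votes:
            totals[action] += weight
        return max(totals, key=totals.get)

    votes = [
        ("scout-3", "explore the cave", 0.7),
        ("scout-7", "explore the cave", 0.4),
        ("planner", "return to base",   0.9),
    ]
    print(arbitrate(votes))  # "explore the cave" wins 1.1 to 0.9; the planner is overruled

A real mechanism would be far more subtle, but the point is just that
commitment can happen without unanimity.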
> Agents with 'varying degrees of independence' is getting into dangerous
> ground, and for little reason. Any sentient agent you create should be a
> subset of the whole. It should think of itself as a part of the distributed
> mind, it should have close mental contact with the rest of that mind when it
> is convenient, and when its task is done it should merge back into it.
I would say that an agent should be as independent and intelligent
as is appropriate to the situation. I do think there are situations
where a considerable degree of independence is appropriate. Some agents
may be separated from others for long periods of time, in situations
where high bandwidth communication is not feasible.
> If you start making agents that think of themselves as separate people who
> serve you, then you're practicing slavery.
As I said above, I don't think it is meaningful to ask whether these
agents think of themselves as separate people. The question seems to
assume a dichotomy which is not necessarily going to exist in the future.
The very notion of identity becomes slippery if you have new ways of
organizing mental structures.
Hal