TECH: Posthuman cognitive architectures

From: Billy Brown (bbrown@conemsco.com)
Date: Mon Mar 01 1999 - 14:43:51 MST


hal@rain.org wrote:
> I see the posthuman as having a more complex mental structure than we
> do today. I envision it having multiple parts, with different degrees of
> autonomy, independence, and intelligence. Some parts of the mind will
> have human or sub-human level intelligence, others will be superhuman.
> Parts may be relatively independent of the rest, or they may be tightly
> integrated, or there may be periods of independence followed by periods
> of integration.

You bet. People who live mostly in VR may choose to become highly
concentrated entities, as may those who don't travel or who exist as
software on high-speed planetary networks. However, those of us who want
to travel will need more flexibility.

> This architecture extends the power of the posthuman mind without
> requiring the costly communications and coordination infrastructure that
> would be necessary to bind all parts of the posthuman mind as
> tightly as our own mental structure. The posthuman manipulates the
> world through a small army of agents, all part of it in some sense, all
> controlled by it, but at least some of the time working independently.

"Costly" in what sense? I would expect both the hardware and software to be
inexpensive by the time such things are possible.

The big reason I see for distributed consciousness is light speed delay.
You could easily have parts of yourself scattered across a good fraction of
the solar system, and each component needs to be able to deal with events in
real time. The elegant solution is a system in which components in close
proximity merge into a single consciousness, while those that are isolated
by distance can function as independent people.
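To put rough numbers on that delay argument: here is a minimal sketch (my
own illustration, not from the original post) of one-way light-speed lag at
a few solar-system distances. The distances are round figures, not precise
ephemerides, but they show why any component more than a light-second or
two away has to act as an independent mind rather than waiting on the rest.

```python
# One-way light-speed delay at illustrative solar-system distances.
# Distances are approximate round numbers chosen for illustration.
C = 299_792_458  # speed of light, m/s

distances_m = {
    "Earth to geosynchronous orbit": 3.6e7,
    "Earth to Moon": 3.84e8,
    "Earth to Mars (closest approach)": 5.5e10,
    "Earth to Jupiter (closest approach)": 5.9e11,
}

for label, d in distances_m.items():
    # delay in seconds = distance / speed of light
    print(f"{label}: {d / C:,.1f} s one-way")
```

Even at Mars's closest approach the one-way lag is about three minutes, and
a round-trip conversation takes six; merged real-time consciousness is only
plausible among components within a light-second or so of each other.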

The trick, of course, is making sure that all your isolated selves continue
to see themselves as parts of the same person. IMO, you need strong SI to
make such a system actually work.

> The tool must fit the job. This maxim applies to minds as well
> as objects. When constructing or extending our posthuman mental
> architecture, there is no need to provide super-human
> intelligence to all of the agents which will carry out our will. If we
> give some of them only human intelligence, I see no ethical flaws in
> that, any more than our own mentality is ethically flawed in delegating
> tasks to spinal cord neural structures which themselves have no hope of
> advancing to a higher state.
>
> This may not be the only possible posthuman mental structure. But I
> see it as a plausible approach, a balance between expensive centralized
> control systems and disorganized collections of autonomous agents.
> From one perspective, it is human slavery. But from another point of
> view, it is a single individual whose mental parts have a degree of
> independence. I hesitate to call this organizational structure immoral.

Agents with 'varying degrees of independence' get you into dangerous
territory, and for little reason. Any sentient agent you create should be a
subset of the whole: it should think of itself as a part of the distributed
mind, it should have close mental contact with the rest of that mind when
that is convenient, and when its task is done it should merge back into it.

If you start making agents that think of themselves as separate people who
serve you, then you're practicing slavery.

Billy Brown, MCSE+I
bbrown@conemsco.com



This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 15:03:12 MST