RE: TECH: Posthuman cognitive architectures
Billy Brown (bbrown@conemsco.com)
Tue, 2 Mar 1999 09:44:18 -0600
hal@rain.org wrote:
> Costly in terms of bandwidth allocation; in terms of transmission
> power; in terms of space for antennas and transmission-reception
> equipment within the brain.
In some environments this would be an issue. However, I would expect any
inhabited area to have a very high-capacity wired network. Give each piece
of your hardware a short-range, high-bandwidth communication system capable
of linking into the local wired network, and you can achieve close coupling
at very low cost. Of course, you have to be able to trust your encryption.
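Just to make the idea concrete, here is a rough Python sketch of one
hop of that coupling. It is purely illustrative - the names and the
message format are made up, and HMAC-SHA256 here only authenticates an
update; a real system would also encrypt it with a vetted cipher:

    import hashlib
    import hmac
    import json
    import secrets

    # Pre-shared key among the mind's components. (Hypothetical setup;
    # a real system would use proper key exchange, not a shared token.)
    SHARED_KEY = secrets.token_bytes(32)

    def pack_update(state: dict, key: bytes) -> bytes:
        """Serialize a state delta and append an HMAC-SHA256 tag, so
        the receiving node can verify it came from a trusted part."""
        payload = json.dumps(state, sort_keys=True).encode()
        tag = hmac.new(key, payload, hashlib.sha256).digest()
        return payload + tag

    def unpack_update(blob: bytes, key: bytes) -> dict:
        """Verify the tag before merging; reject tampered updates."""
        payload, tag = blob[:-32], blob[-32:]
        expected = hmac.new(key, payload, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("untrusted update: bad authentication tag")
        return json.loads(payload)

    # One hop of close coupling: a mobile component pushes a state
    # delta over the local wired network to the rest of the mind.
    blob = pack_update({"node": "hand-unit-7", "sensor_log": [1, 2, 3]},
                       SHARED_KEY)
    print(unpack_update(blob, SHARED_KEY))

The point is that the trust problem reduces to key management: whoever
holds the key is 'part of you' as far as the network is concerned.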
> As I said above, I don't think it is meaningful to ask whether these
> agents think of themselves as separate people. The question seems to
> assume a dichotomy which is not necessarily going to exist in the future.
> The very notion of identity becomes slippery if you have new ways of
> organizing mental structures.
Slippery, yes. Irrelevant, no.
Let's say you're a distributed mind, and you need to send something out
to the Oort cloud to run a quick errand or two. You have several
different ways of approaching the problem:
1. You can put together a functional subset of your own mental
processes, including the higher-level processes that give you
self-awareness and a sense of identity. You may or may not also give it
some specialized abilities for the job at hand. While the agent is out
of contact it operates as a complete individual - it is essentially a
simplified version of the whole 'you'. When it comes back it merges
back into the whole, and its experiences become part of your knowledge
base.
2. You do the same thing as in case 1, but the information flow is
one-way: when the agent comes back you zero its memory without reading
it, and reprogram the shell from scratch for its next mission.
3. You create a specialized, non-sentient AI capable of performing the
mission (there are very few tasks that would actually require sentience
per se). It may or may not include any of your own mental processes,
and it may or may not be retained after it completes its mission.
4. You create a sentient agent with a mind specially designed for its
mission. Some of its components are parts of the distributed mind, and
it is capable of operating as part of that mind when the bandwidth is
available. It operates independently while away, and merges back into
the whole when it returns.
5. As in case 4, but the agent is completely synthetic (i.e. it does
not contain any components of the parent mind).
6. As in case 4, but the agent never operates as part of the whole. It
is capable of doing so, but the link is never actually used. It
operates independently for its entire existence, going from one remote
mission to another.
7. As in case 6, but the agent is incapable of merging into the whole.
It lacks the mental faculties for such complete telepathic immersion.
It can only operate as an independent entity, and it develops its own
experience and personality independently of the parent mind.
I would say that cases 1, 3 and 4 are clearly OK. Case 5 doesn't seem
significantly different from 4 to me, but I expect many people will
disagree. Cases 2 and 6 are a bit iffy - IMO we need a deeper understanding
of how minds work before we can make an accurate judgment. However, case 7
is clearly a form of slavery - here you have a sentient being who has its
own identity and is being forced to serve the needs of a different, more
powerful entity.
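To make the pattern behind those judgments explicit, here is a rough
sketch (Python again, purely illustrative - the class and axis names
are just my shorthand for the cases above):

    from dataclasses import dataclass

    @dataclass
    class AgentDesign:
        """One point in the design space sketched above."""
        sentient: bool       # does the agent have self-awareness?
        shares_parent: bool  # built from the parent mind's processes?
        can_merge: bool      # has the faculties for full re-merging?
        does_merge: bool     # do its experiences rejoin the whole?

    CASES = {
        1: AgentDesign(True,  True,  True,  True),
        2: AgentDesign(True,  True,  True,  False),  # memory zeroed unread
        3: AgentDesign(False, False, False, False),  # non-sentient tool
        4: AgentDesign(True,  True,  True,  True),   # purpose-built
        5: AgentDesign(True,  False, True,  True),   # fully synthetic
        6: AgentDesign(True,  True,  True,  False),  # link never used
        7: AgentDesign(True,  True,  False, False),  # cannot merge at all
    }

    def verdict(design: AgentDesign) -> str:
        """My rough classification from the discussion above."""
        if not design.sentient:
            return "OK"       # a tool, not a person
        if design.can_merge and design.does_merge:
            return "OK"       # part of the whole, experiences retained
        if not design.can_merge:
            return "slavery"  # an independent person pressed into service
        return "iffy"         # mergeable, but never actually merged

    for number, design in CASES.items():
        print(number, verdict(design))

Written this way, the moral question tracks two things - sentience, and
whether the agent's experiences ever actually rejoin the whole - which
is why case 7 stands out no matter how the agent was built.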
Billy Brown, MCSE+I
bbrown@conemsco.com