From: Ben Goertzel (ben@intelligenesis.net)
Date: Thu Dec 14 2000 - 16:29:53 MST
> The counterargument for CaTAI is the same as the counterargument for
> humans - that we are unified minds and that our subgoals don't have
> independent volition. When was the last time your mind was taken over by
> your auditory cortex? Maybe you once had a tune you couldn't get out of
> your head, but there's a difference between a subprocess exhibiting
> behavior you don't like, and hypothesizing that a subprocess will exhibit
> conscious volitional decision-making. The auditory cortex may annoy you
> but it cannot plot against you; it has a what-it-does, not a will.
The auditory cortex is not a subgoal or subprocess/substructure set up by the mind; it's a subprocess/substructure set up by evolution...
Sexuality is a subgoal set up by evolution, which has overtaken its supergoal (procreation) in many cases. Otherwise, people would never use birth control.
Sometimes the subgoal should take over for the goal. If your goal is to write a book, and a subgoal of this is to solve a certain intellectual problem, you may find that the problem itself is more interesting than the book, give up the book-writing project, and devote yourself only to the problem. The subgoal then replaces its supergoal as a way of achieving the super-supergoal of amusing oneself, stimulating one's mind, or whatever...
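Here is a minimal sketch of that promotion step over a toy goal tree (this is just an illustration, not CaTAI or Webmind code; Goal, utility, and promote_subgoal are hypothetical names):

class Goal:
    """A node in a toy goal tree; utility is a rough measure of how
    well this goal serves the top-level goal."""
    def __init__(self, name, utility, subgoals=None):
        self.name = name
        self.utility = utility
        self.subgoals = subgoals if subgoals is not None else []

def promote_subgoal(supersupergoal, supergoal, subgoal):
    """If a subgoal serves the super-supergoal better than its
    supergoal does, drop the supergoal and attach the subgoal directly."""
    if subgoal.utility > supergoal.utility:
        supersupergoal.subgoals.remove(supergoal)
        supersupergoal.subgoals.append(subgoal)

# "Solve the problem" replaces "write a book" under "amuse oneself".
problem = Goal("solve the intellectual problem", utility=0.9)
book = Goal("write a book", utility=0.6, subgoals=[problem])
amusement = Goal("amuse oneself", utility=1.0, subgoals=[book])

promote_subgoal(amusement, book, problem)
print([g.name for g in amusement.subgoals])  # ['solve the intellectual problem']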
> *The* counterargument for transhumans is that the whole idea of identity
> and identifying is itself an anthropomorphism. Why aren't we worried that
> the transhuman's goal system will break off and decide to take over,
> instead of being subservient to the complete entity? Why aren't we
> worried about individual functions developing self-awareness and deciding
> to serve themselves instead of a whole? You can keep breaking it down,
> finer and finer, until at the end single bytes are identifying with
> themselves instead of the group... something that would require around a
> trillion percent overhead, speaking of infinite memory.
No, a single byte is not really capable of containing a self-model and the processes for maintaining this self-model.
But I have clearly lost track of your thread at this point... sorry...
I do believe that "alienated subgoals" are an inevitable part of intelligence, but I also suspect that in a transhuman AI this phenomenon can be reduced to a much lower level than we see in the human mind.
ben