From: Eric Watt Forste (arkuat@pobox.com)
Date: Mon Feb 03 1997 - 12:05:46 MST
Eliezer Yudkowsky writes:
> Our mind contains cognitive objects called 'symbols'. Nobody knows
>where or how they are stored, except that we think the hippocampus
>creates them and the cerebellum retrieves them.
I thought the cerebellum was involved in fine-scale motor coordination,
not in conceptual recall. Are you sure you don't mean
"cerebral cortex" rather than "cerebellum"? That's a guess on my
part... you've really lost me.
> The network does ground out, because semantic primitives other than
>symbols exist. As a general rule the non-symbolic primitives consist of
>either some experiences, or a transformation performed on the current
>working memory or visualization, often by attempting to alter the
>visualization so that it is analogous to abstracted extracts from
>previous experience.
This is an interesting tack to take, but your assertion that
"experiences" or "transformations performed on the current working
memory or visualization" are nonsymbolic is unsupported. The
distinction you are trying to draw here seems spurious to me. I
suspect that where you are using the Hofstadterian word "symbol",
I would use the word "concept"; but it's not as if we have pinned
these things down in the physical brain yet, so I suppose we can
use whatever words we choose.
> Classical AIs have no visualizational facilities and their symbols are
>all defined in terms of other symbols, which is why classical AI is such
>a horrible model of the mind.
The formation of concepts from the interaction between nonlinguistic
percepts and linguistic percepts is still quite poorly understood.
Your criticism above fails to touch (for instance) Moravec's robots,
since his machines do have visualizational facilities and their
"symbols" (though actually they're probably much closer to naked
percepts than to linguistically informed concepts) are constructed
on the basis of data from their sensors, and yet they still come
nowhere near human minds. They are impressively moving down the
sphexishness axis, though.
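To make the grounding point concrete, here is a toy sketch in Python
(entirely my own illustration, not a description of anyone's actual
system): in the "classical" table every symbol is defined only in terms
of other symbols, so chasing definitions never bottoms out, while in
the "grounded" table some chains terminate in stand-in perceptual data.

    # Toy illustration only: "classical" symbols are defined solely by
    # other symbols, so definition-chasing never reaches a percept.
    classical = {
        "zebra": ["horse", "stripes"],
        "horse": ["animal"],
        "animal": ["living"],
        "living": ["animal"],          # circular
        "stripes": ["pattern"],
        "pattern": ["stripes"],        # circular
    }

    # "Grounded" symbols eventually bottom out in (made-up) sensor data.
    grounded = {
        "zebra": ["horse", "stripes"],
        "horse": ["animal", "hooves"],
        "animal": ("PERCEPT", [0.5, 0.3, 0.8]),
        "hooves": ("PERCEPT", [0.7, 0.7]),
        "stripes": ("PERCEPT", [0.1, 0.9, 0.1, 0.9]),
    }

    def grounds_out(symbol, table, seen=frozenset()):
        """True if every definitional chain from symbol reaches a percept."""
        if symbol in seen or symbol not in table:
            return False                     # circular or undefined
        entry = table[symbol]
        if isinstance(entry, tuple):         # bottoms out in perceptual data
            return True
        return all(grounds_out(s, table, seen | {symbol}) for s in entry)

    print(grounds_out("zebra", classical))   # False: symbols all the way down
    print(grounds_out("zebra", grounded))    # True: chains end in percepts

None of this settles whether percept-level "symbols" like Moravec's
deserve the name, of course; it only shows the structural difference
Eliezer is pointing at.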
I suppose I'm going to have to read Calvin's THE CEREBRAL CODE now, as
it seems to be the latest hot book. (I wish Edelman were a better
writer.) (And after rereading what I've written below, I'm thinking that
I'm long overdue to get around to Bateson.)
>Definition (n): An explanation intended to convey all
>information necessary for the formation of a symbol.
I think the word "symbol" is a bit vague for this context, but I
understand your wanting to follow Hofstadter's jargon. Perhaps a
definition is an explanation intended to convey all *linguistic*
perceptual information necessary for the initial differentiation
of a concept from a preexisting (in the audience's mind) seed
concept... hence the traditional Aristotelian emphasis on genus
and differentia in definitions. But definitions, being linguistic
constructs, clearly cannot contain the nonlinguistic perceptual
information necessary for the full and accurate differentiation of
the new concept from the seed concept. Because the words used
*within* the linguistic definition are presumably connected to
nonlinguistic perceptual memories associated with their initial
formation and development, a definition of a new word is an attempt
to bootstrap nonlinguistic perceptual information (associated with
the words used to construct the definition) into the newly
differentiating concept.
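To put the same point in toy form (again my own sketch, not a claim
about how brains actually do it): the linguistic definition supplies
only the genus and differentia, and whatever perceptual grounding the
new concept gets is borrowed from associations the audience already
attaches to the defining words.

    # Toy sketch: a definition can only redirect perceptual associations
    # that the audience already attaches to the words it uses.
    audience_percepts = {
        "horse": ["seen at a fair", "hoofbeat sound"],
        "stripes": ["crosswalk paint", "referee's shirt"],
    }

    def define(genus, differentia):
        """Bootstrap a new concept out of already-grounded words."""
        borrowed = []
        for word in [genus] + differentia:
            borrowed += audience_percepts.get(word, [])  # nothing new arrives
        return {"genus": genus,
                "differentia": differentia,
                "perceptual_grounding": borrowed}

    zebra = define("horse", ["stripes"])
    # Every perceptual bit attached to 'zebra' was already attached to
    # 'horse' or 'stripes' in the audience's head beforehand.

If the audience lacks those prior associations, the definition hands
over nothing but words.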
Clearly, the most effective definition is going to be as dependent
on the conceptual resources available for exploitation within the
audience's mind as it is on the actual "content", within the
speaker's mind, of the concept being defined.
Let me add my usual caveat that by "mind" I mean the information
process or system instantiated by a human physical nervous system,
and not anything particularly mystical.
-- Eric Watt Forste ++ arkuat@pobox.com ++ http://www.pobox.com/~arkuat/