Jim Fehlinger wrote:
>
> Not to be coy, I will admit that I'm thinking in particular about
> the philosophy of seed AI sketched in Eliezer Yudkowsky's CaTAI
> 2.2[.0] ( http://www.singinst.org/CaTAI.html ). I've never quite
> been able to figure out which side of the cognitive
> vs. post-cognitive or language-as-stuff-of-intelligence vs.
> language-as-epiphenomenon-of-intelligence fence this document
> comes down on. There are frustratingly vague hints of **both**
> positions.
I guess that makes me a post-post-cognitivist, then.
Because what I was thinking when I read your preamble was "Hm, the old
debate about symbols, representational power, and so on. I'm well out of
it."
Intelligence ("problem-solving", "stream of consciousness") is built from
thoughts. Thoughts are built from structures of concepts ("categories",
"symbols"). Concepts are built from sensory modalities. Sensory
modalities are built from the actual code.
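To caricature that layering in code (every name below is an illustrative
placeholder of my own, not anything actually specified in CaTAI):

    class Modality:
        """Bottom layer: 'actual code' implementing a sensory workspace."""
        def __init__(self, name):
            self.name = name
            self.workspace = {}      # holds depictions built by concepts

    class Concept:
        """Middle layer: grounded in a modality, not a bare token."""
        def __init__(self, name, render):
            self.name = name
            self.render = render     # procedure that paints or alters a workspace

    class Thought:
        """Top layer: a structure of concepts, invoked in a workspace."""
        def __init__(self, concepts):
            self.concepts = concepts
        def invoke(self, modality):
            for concept in self.concepts:
                concept.render(modality.workspace)
            return modality.workspace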
A classical AI can sort-of represent the "structure" part of "concept
structures", but this is the least interesting part, and there never were
any actual concepts to back them up, just LISP tokens serving as
placeholders for concepts. No amount of targeting information will enable
you to construct a thought if there are no concepts to construct it with
and no modality workspace for the thought to be constructed in.
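In toy code, a classical "thought" bottoms out like this (an illustrative
strawman, not any particular system):

    # All the targeting information is present, but the leaves are bare
    # tokens: no render procedure, no workspace, nothing to invoke.
    classical_thought = ("MODIFIES", "GREEN", "HOTDOG")

    relation, modifier, head = classical_thought  # traverse, match, rewrite...
    # ...but asking what "GREEN" means bottoms out in the token itself;
    # there is no modality workspace in which invoking this structure
    # could construct an articulated thought.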
"Language", I suppose, would be the innate knowledge of syntax that
combines with our knowledge of semantic possibilities to enable us to
translate a serial stream of concept-invoking words into a structure of
mutually targeting and modifying concepts, such that invoking all the
concepts within the structure creates an articulated thought in the
workspace area of the sensory modalities. For example, take "green
hotdog", an adjective-noun pair in which "green" modifies "hotdog":
"hotdog" fires first, then "green" fires and modifies it, and the end
result is a picture of a green hotdog mapped out across the neurons of
your visual cortex. This picture
is where the best part of the representational power comes in. You can
get a fair degree of representational power from pure deductive reasoning
some of the time, but learning new concepts, or learning how to
manipulate them deductively, requires the capability for sensory
reconstruction.
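A toy sketch of the "green hotdog" example (again, the dictionary
standing in for the visual cortex and the two concept functions are
illustrative placeholders only):

    visual = {}                      # stands in for the visual modality

    def hotdog(workspace):
        # Noun concept: fires first, constructs the base depiction.
        workspace["shape"] = "hotdog"
        workspace["color"] = "tan"   # default coloring

    def green(workspace):
        # Adjective concept: fires second, modifies the existing depiction.
        workspace["color"] = "green"

    for concept in (hotdog, green):  # noun first, then adjective
        concept(visual)

    print(visual)                    # {'shape': 'hotdog', 'color': 'green'}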
So if you want to know whether CaTAI is symbolic or connectionist AI, the
answer is a resounding "No". This is one of the old, false dichotomies
that helped to cripple the field of AI.
As for emotions, they provide some powerful cognitive capabilities as well
as being the strings whereby our goal systems are manipulated in adaptive
ways. But the cognitive capabilities can be duplicated as simple
reasoning without the need to screw up the entire goal system
architecture. An emotionless human is stupid; an AI is not "emotionless"
but simply a being that reasons and chooses in other ways.
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence