From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed May 24 2000 - 15:45:08 MDT
What is a codic cortex?
Does Deep Blue have a chess cortex?
Does Kasparov have a sensory modality for chess?
Is a spreadsheet a budgetary cortex?
The analogy between a codic cortex and a visual cortex is intended to be
exact. If there is anything of high-level thought in the visual cortex,
the CaTAI model doesn't know about it (*). The codic cortex doesn't
translate goals into designs; it visualizes high-level features of code
as low-level features of code. More importantly, it's about noticing
low-level features, then mid-level features, then high-level features.
These features aren't intrinsically goal-oriented any more than the edge
of a knife, detected as a contrast between retinal neurons, is a design
feature. First comes the perception of edgeness and sharpness; then
come thoughts about the usefulness and intentionality of sharpness. A
codic cortex might be able to perceive extremely high-level features
such as "modularity", even have intuitions about qualities like
"elegance" or at least "density", without supporting modularity or
elegance as goals.
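To make the layering concrete, here is a rough sketch in Python using
the standard ast module. The particular features, and "density" as a
stand-in for the elegance intuition, are my own illustrative
inventions, not part of the CaTAI model:

    import ast
    import textwrap

    SOURCE = """
    def find(xs, target):
        for i, x in enumerate(xs):
            if x == target:
                return i
        return -1
    """

    tree = ast.parse(textwrap.dedent(SOURCE))

    # Low-level features: raw syntactic events, the codic analogue of
    # contrasts between retinal neurons.
    low = [type(node).__name__ for node in ast.walk(tree)]

    # Mid-level features: recurring structures noticed in the low-level map.
    mid = {
        "loops": sum(isinstance(n, (ast.For, ast.While)) for n in ast.walk(tree)),
        "branches": sum(isinstance(n, ast.If) for n in ast.walk(tree)),
        "returns": sum(isinstance(n, ast.Return) for n in ast.walk(tree)),
    }

    # High-level features: crude, non-goal-oriented intuitions - here,
    # "density" as syntax nodes per function definition.
    defs = sum(isinstance(n, ast.FunctionDef) for n in ast.walk(tree))
    high = {"functions": defs, "density": len(low) / max(1, defs)}

    print(mid, high)

None of these features is a goal; they are just things noticed, the way
edgeness is noticed before sharpness is wanted.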
A human, looking at a piece of code that plays tic-tac-toe, reads the
code line by line and builds up an understanding of the higher
structures - conditionals, recursion, search trees. We use our
knowledge, our concepts, about what each piece of code would do, to
establish a logical argument that the whole will accomplish some
higher-level purpose. (Like all logical arguments, the code argument
can easily be wrong, in which case the code needs to be debugged.) We
move from the code itself, to our knowledge of what each line is
supposed to do, to our knowledge of what the whole does, to our
knowledge of what the code is supposed to represent (i.e. tic-tac-toe).
Unless the codic structure is very small, we lose track of the
correspondences; we are forced to analyze our abstract knowledge of what
the code is supposed to do, rather than the code itself.
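For concreteness, the kind of code in question might look like this
minimal Python sketch (my own toy example, not anyone's canonical
tic-tac-toe); a human reader sees the lines first, then the
conditionals and the recursion, then the search tree, then
"tic-tac-toe":

    LINES = ((0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6))

    def winner(board):
        # A conditional over the eight winning lines.
        for a, b, c in LINES:
            if board[a] is not None and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def moves(board):
        return [i for i, v in enumerate(board) if v is None]

    def value(board, player):
        # Recursion over the game tree: the search tree a reader
        # eventually comes to "see" in the code.
        w = winner(board)
        if w is not None:
            return 1 if w == player else -1
        if not moves(board):
            return 0
        other = "O" if player == "X" else "X"
        best = -1
        for m in moves(board):
            board[m] = player
            best = max(best, -value(board, other))
            board[m] = None
        return best

    print(value([None] * 9, "X"))   # 0: perfect play is a draw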
An AI with a codic modality would see all the code at once, and all the
higher features at once, to the limits of working memory. It would have
all the non-intentional perceptions that a human programmer has
consciously developed - or at least, that's the goal. If a dereference
makes no check for null pointers, that's a feature in the same way that
a neural edge is a feature; if the pointers coming in are guaranteed
non-null, or guaranteed to refer to some particular class of objects,
that is a feature. Actually, this is still too crystalline to be a true
perception. One way to do it would be to simulate the action of each
line of code on, say, a hundred typical and special-case inputs and a thousand
random inputs; this would give a stochastic, distributed image of what
each line of code was doing. And so on. There are a lot of different
methods that can be used; the general idea is to build up a perception
of the code. A codic cortex doesn't write code. It sees code. If the
AI can imagine seeing a particular thing, the codic cortex can visualize
code, but seeing the code - *noticing* the code - comes first.
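A minimal Python sketch of that stochastic approach (the names and the
tallying scheme are my illustrative inventions, not a specification):

    import random

    class Node:
        def __init__(self, value):
            self.value = value

    def stochastic_image(line, inputs):
        # Run one line-sized operation over many inputs, tallying what
        # happens; the tallies form a distributed picture of the line.
        image = {}
        for x in inputs:
            try:
                line(x)
                outcome = "ok"
            except Exception as e:
                outcome = type(e).__name__
            image[outcome] = image.get(outcome, 0) + 1
        return image

    deref = lambda p: p.value          # a dereference with no null check
    special = [None]                   # the null-pointer case
    typical = [Node(random.random()) for _ in range(1000)]

    print(stochastic_image(deref, special + typical))
    # {'AttributeError': 1, 'ok': 1000}

The unchecked dereference shows up not as a rule violation but as a
feature of the image: a sliver of the input distribution that faults.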
After that comes forming concepts about the code, seeing how the code
and the low-level features and the high-level features all correspond to
this abstract shape called tic-tac-toe. Only then can you move from
knowing the shape of tic-tac-toe to visualizing a piece of code that
fills it.
Is an optimizing compiler a codic cortex? Is a *decompiler* a codic
cortex? No. A codic cortex would contain some of the same features, do
some of the same things, just as a visual cortex can do some of the same
things as a CAD/CAM program or a video game. Does Deep Blue have a
sensory modality for chess? No. In Deep Blue, the goal and the mental
image are identical, not integrated; more importantly, Deep Blue doesn't
notice what it sees. It can't be said to see the chess search tree
because it isn't performing feature extraction on the search tree, only
on the individual boards. Will Deep Blue notice if there are recurring
features in all the chess boards, all the pixels, within the search
tree? Not as I understand Deep Blue's architecture.
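The distinction fits in a few lines of Python; the toy boards, the tree
shape, and the particular tree-wide feature are invented for
illustration:

    def evaluate(board):
        # Per-board static evaluation, roughly the kind of thing Deep
        # Blue computes at each node (here just a toy sum).
        return sum(board)

    def boards_of(tree):
        # Walk a (board, children) search tree, yielding every board.
        board, children = tree
        yield board
        for child in children:
            yield from boards_of(child)

    def recurring(tree, feature):
        # Tree-level perception: a feature that holds across the whole
        # search tree - the kind of regularity Deep Blue never extracts.
        return all(feature(b) for b in boards_of(tree))

    tree = ((1, 0, 1), [((1, 1, 1), []),
                        ((1, 0, 0), [])])

    print([evaluate(b) for b in boards_of(tree)])    # board by board: [2, 3, 1]
    print(recurring(tree, lambda b: b[0] == 1))      # a tree-wide regularity: True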
Above all, a sensory modality is something that exists within a higher
system. A sensory modality by itself is helpless. Deep Blue wasn't
helpless; therefore Deep Blue is not a sensory modality. As a
definitional matter this is a non-sequitur; as a practical matter, it is
fundamentally and deeply true. If you implement the higher layers
directly as code, you can't form genuine, flexible higher layers.
Does Kasparov have a chess cortex? Of course not. He has a visual
cortex. He has managed to form extremely powerful concepts about the
chess representations within his visual cortex. What Deep Blue does
through brute force searches, Kasparov does by thinking about the
conceptual layer of the problem. Kasparov perceives and reasons about
deep underlying regularities in the game of chess, rather than the great
search tree itself. Deep Blue and Kasparov are two sides of the same coin.
Is a spreadsheet a budgetary cortex? Such uses must be reined in
carefully if the term "sensory modality" is not to lose all meaning and
become just another convenient buzzword. Sensory modalities are part of
a larger system. They support concept formation and ultimately
introspection. A spreadsheet does neither. The fact that a spreadsheet
is very well suited to representing budgets may suggest that it would be
a fine and profitable thing to have an AI with a genuine budgetary
cortex containing some of the same structure as a modern spreadsheet.
But the spreadsheet itself is not a modality; it is a spreadsheet.
--
(*): Some books mention case studies in which people who lose their
entire visual cortex literally don't know that they're blind. They not
only forget what it is like to see; they forget that the sense of sight
exists. This is such a radical proposition that I'd want to see the
original research article before depending on its factuality; and, even
if true, it would hopefully mean only that visual thoughts needed to be
routed through the visual cortex, not that they were stored there. A
similar (though less radical) syndrome is known in which neurological
damage paralyzes part of the body and the paralysis is denied.
--
sentience@pobox.com    Eliezer S. Yudkowsky
http://singinst.org/beyond.html