> Such revolutionary devices should not be expected in the very near future.
> They will require decoding information from elsewhere in the brain, looking
> at signals that are far more complicated than those decoded from the cat's
> thalamus but, in a way, the principle has been demonstrated.
All right. I wasn't going to take the time to write this up even briefly, since I didn't think it would be necessary, but perhaps I was wrong.
Certain thoughts - certain complex intellectual structures, presumably
distributed all over the frontal cortex - reliably create certain
emotions. This is what I call a semantic binding, and it's an amazing
thing, when you think about it. Imagine translating the thoughts into
the kind of propositional-logic semantic networks LISP-based AIs use -
like "Bob broke my ribs" becoming "hurt(person72, me)", which, in one of
those AIs, really translates to "G023(G052, G187)".
(Yes, Eugene Leitl, I know the mind's not a semantic network; that is,
in fact, exactly my point.)
How does the mind know to bind those thoughts to the emotion of anger?
How can the limbic system reliably single out symbol G023? If you look
at the limbic system - which is modular and comparatively simple - and
at the codes it uses to interface with high-level cognition, you'll find
a set of reliable regularities, and you can use those regularities as
the Rosetta Stone of the mind.
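Here's an equally toy sketch, in the same Lisp, of what using those regularities as a Rosetta Stone might look like: if one cortical symbol is active in every episode of anger, the co-occurrence alone identifies it, with no prior knowledge of the code required. The observation records are made up.

    ;; OBSERVATIONS is a list of (emotion . active-symbols) records.
    ;; Return the symbols active in every episode of EMOTION -- the
    ;; regularity that lets you label, say, G023 as anger-bound.
    (defun decode-binding (observations emotion)
      (let ((episodes (loop for (e . symbols) in observations
                            when (eq e emotion) collect symbols)))
        (reduce #'intersection episodes)))

    ;; Three anger episodes; only G023 is active in all of them.
    (decode-binding '((anger G023 G052)
                      (anger G023 G187)
                      (anger G023 G099)
                      (fear  G041 G052))
                    'anger)
    ;; => (G023)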
Once again, it all boils down to my own little specialty - the interface between emotions and cognition.
--
sentience@pobox.com    Eliezer S. Yudkowsky
http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS    Typing in Dvorak    Programming with Patterns
Voting for Libertarians    Heading for Singularity    There Is A Better Way