Re: Putnam's kind of realism

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Nov 03 1999 - 11:54:13 MST


It sounds to me like Putnam, or the person explaining Putnam, or
someone, is failing to clearly distinguish between the questions "What
is truth?" and "What is rational?" The truth precedes us, generated us,
and acts according to laws of physics which we cannot specify. There is
an objective answer to the question "What is truth?". Rationality is
the process whereby we attempt to arrive at the truth. There is no
objective answer to the question "What is rational?", not *here*, not
without direct access to objective reality. Rather, "How should
rationality work?" is an engineering question about how to create
systems that can model and manipulate reality - or, more precisely, how
to create parts of reality whose internal patterns mirror the whole, to
the point that the internal processes can predict the external
processes in advance.
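
(To make that last point concrete, here is a toy sketch of the kind of
loop I mean - purely my own illustration, with a made-up environment,
update rule, and numbers, not anything drawn from Putnam. An internal
pattern gets pushed toward the external process by nothing more than
prediction error:

    import random

    TRUE_SLOPE = 3.0          # the external regularity; never seen directly

    def external_process(x):
        # Objective reality: generates observations by its own rules.
        return TRUE_SLOPE * x + random.gauss(0, 0.1)

    internal_slope = 0.0      # the internal pattern meant to mirror it
    rate = 0.05

    for step in range(1000):
        x = random.uniform(-1, 1)
        predicted = internal_slope * x          # predict in advance
        observed = external_process(x)          # reality answers
        error = observed - predicted            # correctness comes from outside
        internal_slope += rate * error * x      # adjust the mirror

    print(f"internal estimate: {internal_slope:.2f}, external: {TRUE_SLOPE}")

The point is just that "correct" is defined by what the external
process does, not by what the internal state happens to say.)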

As for AI, it seems to me that the concept of an internalist mental
model is a confusion between "is" and "should". A reference to "green"
*is* the cognitive concept of "green", but what it *should* be, what the
system tries to make it converge to, is the external reality of green.
If you have a wholly internalist system, then the concepts don't
converge to anything. If the only definition of correctness is the
thought itself, there's no way to correct malfunctions. The system is
meta-unstable. The AI thinks "I can't possibly be wrong; anything I
think is by definition correct," and then it gets silly, just like
subjectivist humans. Shades of Greg Egan's _Quarantine_.
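
To put the contrast in toy form - again just my own illustration, with
made-up names and numbers: a belief corrected against observation
converges on the external fact, while a belief whose only standard of
correctness is itself never moves, and never notices that it hasn't:

    import random

    EXTERNAL_GREEN = 0.72     # the external fact the concept should converge to

    def externalist_update(belief, observation, rate=0.1):
        # Error is measured against the observation, so the belief converges.
        return belief + rate * (observation - belief)

    def internalist_update(belief, observation, rate=0.1):
        # The thought is its own standard of correctness: the error signal
        # is always zero, so a wrong belief is never corrected.
        target = belief
        return belief + rate * (target - belief)

    ext_belief = 0.0
    int_belief = 0.0
    for _ in range(200):
        observation = EXTERNAL_GREEN + random.gauss(0, 0.02)
        ext_belief = externalist_update(ext_belief, observation)
        int_belief = internalist_update(int_belief, observation)

    print(f"externalist belief: {ext_belief:.2f}  (tracks {EXTERNAL_GREEN})")
    print(f"internalist belief: {int_belief:.2f}  (stuck where it started)")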

-- 
           sentience@pobox.com          Eliezer S. Yudkowsky
        http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way

