From: Mark Crosby (crosby_m@rocketmail.com)
Date: Sun Aug 17 1997 - 17:50:01 MDT
Eric Watt Forste writes:
<[Paul M. Churchland's work] Psychological phenomena might easily be
concrete, real, physical, computational phenomena *without* having any
kind of one-to-one correspondence with any of the words we currently
use to talk about psychological phenomena. In other words, they might
*matter* without being *real*, in the sense that there might not
*really* be anything that corresponds to the words. >
This sounds somewhat like a "symbol-grounding" problem. Here’s the
abstract of a fascinating Internet dialog, "Virtual Symposium on the
Virtual Mind", from 1992 [1]:
< When certain formal symbol systems (e.g., computer programs) are
implemented as dynamic physical symbol systems (e.g., run on a
computer) their activity can be interpreted at higher levels . . .
called "virtual" systems. If such a virtual system is interpretable as
if it had a mind, is such a "virtual mind" real? This is the question
addressed in this "virtual" symposium, originally conducted
electronically among four cognitive scientists: Donald Perlis, a
computer scientist, argues that according to the computationalist
thesis, virtual minds are real . . . Stevan Harnad, a psychologist,
argues that . . . virtual minds are just hermeneutic
overinterpretations, and symbols must be grounded in the real world of
objects, not just the virtual world of interpretations. Computer
scientist Patrick Hayes argues that . . . A real implementation must
not be homuncular but mindless and mechanical, like a computer. Only
then can it give rise to a mind at the virtual level. Philosopher Ned
Block suggests that there is no reason a mindful implementation would
not be a real one.>
Harnad has a more formal paper on the symbol-grounding problem [2]
where he says:
< Symbolic representations must be grounded bottom-up in nonsymbolic
representations of two kinds: (1) "iconic representations," which are
analogs of the proximal sensory projections of distal objects and
events, and (2) "categorical representations," which are learned and
innate feature-detectors that pick out the invariant features of
object and event categories from their sensory projections. Elementary
symbols are the names of these object and event categories, assigned
on the basis of their (nonsymbolic) categorical representations.
Higher-order (3) "symbolic representations," grounded in these
elementary symbols, consist of symbol strings describing category
membership relations. Connectionism is one natural candidate for the
mechanism that learns the invariant features underlying categorical
representations, thereby connecting names to the proximal projections
of the distal objects they stand for. In this way connectionism can be
seen as a complementary component in a hybrid nonsymbolic/symbolic
model of the mind, rather than a rival to purely symbolic modeling.>
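To make Harnad's three levels a bit more concrete, here is a toy sketch of my own (not anything from his paper, and with all names and numbers invented): hand-set feature detectors stand in for the connectionist network that would learn the invariant features, elementary symbols are just the category names those detectors license, and a higher-order symbol like "zebra" is a string of already-grounded names.

# (1) "Iconic representations": stand-ins for proximal sensory
#     projections; here just feature vectors [horse-shaped, striped].
iconic = {
    "sample_horse": [0.9, 0.1],
    "sample_zebra": [0.9, 0.95],
    "sample_tiger": [0.1, 0.9],
}

# (2) "Categorical representations": detectors that pick out invariant
#     features.  Hand-set thresholds here play the role a connectionist
#     network would learn.
def detects_horse_shape(projection):
    return projection[0] > 0.5

def detects_stripes(projection):
    return projection[1] > 0.5

# Elementary symbols are the *names* of categories, assigned on the
# basis of these nonsymbolic detectors.
def elementary_symbols(projection):
    symbols = set()
    if detects_horse_shape(projection):
        symbols.add("horse-shaped")
    if detects_stripes(projection):
        symbols.add("striped")
    return symbols

# (3) "Symbolic representations": symbol strings describing category
#     membership, e.g. zebra = horse-shaped & striped.
composed_categories = {"zebra": {"horse-shaped", "striped"}}

def name_object(projection):
    grounded = elementary_symbols(projection)
    for name, definition in composed_categories.items():
        if definition <= grounded:   # all defining symbols are grounded
            return name
    return " & ".join(sorted(grounded)) or "unknown"

for label, projection in iconic.items():
    print(label, "->", name_object(projection))

Running this labels sample_zebra as "zebra" purely by composing grounded names, which is the point of the hybrid scheme: the symbolic layer inherits its meaning from the nonsymbolic detectors underneath it.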
But I also like Patrick Hayes’ reasoning about the evolutionary
advantages of ‘virtual machines’, which is nicely summarized in a
recent JCS Online thread [3]:
Hayes cites Sam Salt saying:
< I do not believe that programs 'learn', 'adapt', 'pounce' and so on,
except by very broad analogy to how humans do these things. Programs
mostly implement only three things (sequence, selection and iteration)
by the way in which they can be pushed through a central processing
unit (CPU) which only accepts inputs sequentially.>
Hayes responds:
< [SNIP] Salt, like many electrical engineers, stops at the processor
circuitry . . . One can't reverse engineer software from hardware . .
. Much software doesn't run on the electronic hardware; it runs on one
or another virtual machine . . . [Salt says: regardless of] your
high-level language . . . the low-level implementation uses the same
lowly sequential CPU. [And Hayes responds:] This is to be celebrated!
Here we have a simple physical mechanism which can produce
extraordinarily complex, symbolically significant, behaviors which can
grow and change, without altering the machine at all! All it needs is
more and more memory, which itself is a simple mechanism. Isn't this
more like a brain than any other mechanism we have ever come across? >
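To make the 'virtual machine' point vivid, here is a toy interpreter of my own (not Hayes's, and with invented instruction names): a fixed, mindless mechanism that only ever does sequence, selection and iteration, yet whose behavior is determined entirely by what is loaded into memory, without altering the machine at all.

def run(program, memory):
    # A minimal virtual machine: one instruction at a time (sequence),
    # with JZ as selection and JMP as the basis of iteration.
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "SET":            # memory[cell] = value
            memory[args[0]] = args[1]
        elif op == "ADD":          # memory[a] += memory[b]
            memory[args[0]] += memory[args[1]]
        elif op == "JZ":           # selection: jump if memory[cell] == 0
            if memory[args[0]] == 0:
                pc = args[1]
                continue
        elif op == "JMP":          # iteration: unconditional jump
            pc = args[0]
            continue
        elif op == "PRINT":
            print(memory[args[0]])
        pc += 1

# Two quite different behaviors from the same unchanged machine, just by
# loading different "memories" (programs) into it.
count_down = [
    ("SET", "i", 3), ("SET", "step", -1),
    ("PRINT", "i"), ("ADD", "i", "step"),
    ("JZ", "i", 6), ("JMP", 2),
]
greet = [("SET", "msg", "hello"), ("PRINT", "msg")]

run(count_down, {})   # prints 3, 2, 1
run(greet, {})        # prints hello

The machine itself never changes; only the contents of memory do, which is the sense in which the interesting, symbolically significant behavior lives at the virtual level rather than in the circuitry.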
Mark Crosby
References:
[1] Hayes, P., Harnad, S., Perlis, D. & Block, N. (1992) "Virtual
Symposium on the Virtual Mind". Minds and Machines 2(3): 217-238. From
Harnad’s abstract.
ftp://cogsci.soton.ac.uk/pub/harnad/Harnad/harnad92.virtualmind
[2] Harnad, S. (1990) "The Symbol Grounding Problem", Physica D 42:
335-346.
ftp://cogsci.soton.ac.uk/pub/harnad/Harnad/harnad90.sgproblem
[3] The "Computational Theory and Connectionism" thread on The Journal
of Consciousness Studies Online (early 1996). Pat Hayes, "Rubbing
Salt in the wound",
http://www.zynet.co.uk/imprint/online/hayes3.html