From: Damien Broderick (d.broderick@english.unimelb.edu.au)
Date: Sun Feb 11 2001 - 20:35:12 MST
I wonder if an AI might learn language most efficiently via holophrastic
utterances, as a baby does. Some gestural languages seem to have this
character (though I don't know much about ASL etc.; see the URL below), in
which groups or sequences of gestures are not merely iconic, deictic
(pointing to objects), or pantomimic, but each encodes a whole phrase or
sentence, as it were: who did what to whom. Children's post-babble
utterances are often like this: a single word (a verb, say) encodes an
elliptical sentence. Parents pretty quickly pick this up, I'm told.
http://www.courses.fas.harvard.edu/~sa34/lectures/asl2000
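To make the idea concrete, here is a minimal toy sketch in Python of what
"holophrastic" means computationally: a single utterance maps to a whole
predicate-argument frame rather than to one referent. Every name in it
(the Frame fields, the sample holophrases) is a hypothetical illustration,
not any real lexicon or system.

# Toy sketch: a holophrase encodes a whole "who did what to whom" frame,
# not a single object. All names here are made-up illustrations.
from dataclasses import dataclass

@dataclass(frozen=True)
class Frame:
    agent: str    # who
    action: str   # did what
    patient: str  # to whom

# One child-style word stands in for an elliptical sentence.
HOLOPHRASES = {
    "up!":   Frame(agent="caregiver", action="lift",   patient="child"),
    "gone!": Frame(agent="object",    action="vanish", patient="scene"),
    "more!": Frame(agent="caregiver", action="repeat", patient="last-event"),
}

def interpret(utterance: str) -> Frame:
    """Unpack one word into the whole sentence it encodes."""
    return HOLOPHRASES[utterance]

print(interpret("up!"))
# Frame(agent='caregiver', action='lift', patient='child')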
The trouble with applying this notion to AI minds-in-boxes is that they
will have an altogether different being-in-the-world from that of
organisms. They
have no inherited template behavioral grammars, no autonomous groping
`babbling' that gets shaped swiftly by feedback from their interaction with
a nurturant and sometimes resistant world. Still, I wonder whether AIs
might learn to communicate, and build up their interior world-models more
rapidly and effectively, if they could be taught in a fluid holophrastic
fashion.
Let their robot fingers do the talking.
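If one wanted to simulate that feedback-shaping loop, a crude sketch might
look like the following: the agent emits candidate holophrases at random,
and a reward signal standing in for the nurturant-and-resistant world
shifts its emission probabilities. The candidate utterances and the
feedback function are entirely invented for illustration.

# Toy sketch of feedback-shaped "babbling": random emissions get shaped
# by a caregiver/world signal. Purely illustrative, nothing real here.
import random

candidates = ["up!", "gone!", "more!", "ba!", "da!"]
weights = {c: 1.0 for c in candidates}

def caregiver_feedback(utterance: str) -> float:
    # Stand-in for the world's response: holophrases that "work" are
    # rewarded; meaningless babble gets a mild push-back.
    return 1.0 if utterance in ("up!", "gone!", "more!") else -0.5

for _ in range(1000):
    # Emit an utterance in proportion to current weights ("babbling").
    u = random.choices(candidates, weights=[weights[c] for c in candidates])[0]
    # Let the feedback shape future babbling, floored so no utterance
    # vanishes entirely.
    weights[u] = max(0.1, weights[u] + 0.1 * caregiver_feedback(u))

# After shaping, the meaningful holophrases dominate the noise.
print(sorted(weights.items(), key=lambda kv: -kv[1]))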
[cue Eliezer]
Damien Broderick