Automated Collaborative Filtering

From: Crosby_M (CrosbyM@po1.cpi.bls.gov)
Date: Mon Sep 23 1996 - 10:42:00 MDT


On Tuesday, September 17, 1996 at 11:29AM, Alexander 'Sasha' Chislenko wrote
about Innovations: business and academia, and about Automated Collaborative
Filtering (ACF) agents for the Internet.

Sorry, I don't have anything about business & academia collaborations,
although I did notice that Firefly was mentioned, as one of only about
three companies working on ACF agents, in a recent Computerworld article
(I think).

However, over the weekend I happened upon an article in my cache from the
2/95 Communications of the ACM Technical Correspondence column, "Remark
About Self-Stabilizing Systems" by Xavier Debest of Hochheim, Germany,
that might be of interest for your development efforts.

Debest uses a simple example of instructing children (or robots) to form
a circle:

"the quality (i.e., persistence) of the result directly depends on the
sameness of the algorithm (or at least its goal) by all components ...
these results [how to achieve stability with autonomous agents] apply to
any system built from a significant number of components which are
moving and evolving independently from one another, but which are
cooperating or competing together to achieve some common goals ..."
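To make that concrete, here is a minimal sketch in Python (my own, not from
Debest's article) under a simplified local rule: every agent repeatedly steps
toward the point at a fixed distance from the group's current centroid.
Because every agent applies the same rule toward the same goal, the
arrangement settles into a circle from any starting positions:

import math
import random

N, R, STEPS, GAIN = 12, 5.0, 300, 0.2

# random initial positions (the "children scattered on the playground")
agents = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(N)]

for _ in range(STEPS):
    # the shared "goal": the group's current centroid
    cx = sum(x for x, _ in agents) / N
    cy = sum(y for _, y in agents) / N
    moved = []
    for x, y in agents:
        # each agent aims at the nearest point on the circle of radius R
        # around the centroid, and takes a small step toward it
        dx, dy = x - cx, y - cy
        dist = math.hypot(dx, dy) or 1.0
        tx, ty = cx + dx / dist * R, cy + dy / dist * R
        moved.append((x + GAIN * (tx - x), y + GAIN * (ty - y)))
    agents = moved

# after enough steps every agent sits roughly R away from the centroid
cx = sum(x for x, _ in agents) / N
cy = sum(y for _, y in agents) / N
print([round(math.hypot(x - cx, y - cy), 2) for x, y in agents])

The stability comes entirely from the sameness of the rule: no agent knows
the global shape, and no coordinator assigns positions.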

"In an environment where software applications are growing more and more
distributed, where the end user gets more and more responsibility, and
where the duration of the technological cycles is shrinking ... it is
futile to try and enforce rigid development rules from the top ..."

"In regard to the development of application software, the lessons
learned from these experiments could be translated as follows: 1. Don't
overspecify ... 2. Decrease the planning level of long-term activities
... 3. Introduce a dynamic component in the planning of big projects ...
4. Implement 'bottom up' from the existing upwards ... 5. Accept
diversity; heterogeneity may be an asset ... 6. Use a goal (object-)
oriented decomposition methodology."

This is pretty much accepted practice among software developers today, so
it may not be news to you. Still, it seems like useful advice for other
types of futurists on this list as well.

By the way, John Holland's recent book on complex adaptive systems, "Hidden
Order", is excellent if you haven't read it.

******************
Here's something else I found in my old notes that might be relevant or
at least interesting for ACF and the list in general.

11/02/94. MENTALITY MODES. In the Nov'94 Communications of the ACM,
David Canfield Smith, developer of KidSim at Apple (a graphical system
for programming by demonstration), responds to a letter criticizing his
team for emphasizing visual interfaces to the detriment of language
skills. The letter writer claims, "languages are built from words, not
pictures. Admittedly, some major languages are written in picture form
but the rest of the race considers that to be a severe disadvantage."
Smith says the letter "raises a valid question: do people think in
images?" as opposed to words. He cites Rudolf Arnheim's "landmark"
1971 book, "Visual Thinking", which argues that words are primarily
useful for (1) their ability to name and quickly distinguish similar
objects, and (2) their adverbial and adjectival quantification
properties; in short, for helping to distinguish differences. But, for
creative thinking and synthesis, visual imaging is more powerful.

Smith cites Albert Einstein, who described his mode of thought in a
1945 survey ("The Psychology of Invention in the Mathematical Field" by
Jacques Hadamard) as follows: "The words of the language [digital state
summaries] ... do not seem to play any role in my mechanism of thought.
The psychical entities which seem to serve as elements in thought are
certain signs and more or less clear images which can be 'voluntarily'
reproduced and combined ... Conventional words or other signs have to be
sought for laboriously only in a secondary stage."

Smith goes on to note: "The educational psychologist Jerome Bruner
argues that people develop three mentalities or ways of thinking, which
he labels enactive, iconic and symbolic." Smith quotes from Bruner's
1966 book, "Toward a Theory of Instruction": "Any domain of knowledge
... can be represented in three ways: by a set of actions appropriate
for achieving a certain result (enactive representation); by a set of
summary images or graphics that stand for the concept without defining
it fully (iconic representation); and by a set of symbolic or logical
propositions drawn from a symbolic system that is governed by rules or
laws for forming and transforming propositions (symbolic
representation)."

Recent object-oriented analysis methodologies seem to recognize this by
defining dynamic, object and functional models for any system.

In general, it seems to me that the more detailed one's knowledge of the
world becomes, the more structured (and symbolic) it must be, with more
categories. In order to successfully assimilate a greater variety of
information, one's knowledge base is forced to take on a highly
normalized relational structure, as opposed to the more hierarchical or,
at a lower level yet, object-oriented knowledge bases of those less
educated. But, while facilitating analysis and enabling a broader range
of knowledge to be stored, the relational model imposes penalties when
it comes to spotting trends and making decisions, because many more
fragmented knowledge sources must be joined to produce a useful
synthesis. Organizations are finding that 'multidimensional' databases,
sitting 'on top' of the broader relational database, which "rely on a
hierarchical structure, with sets of data feeding into higher levels of
summary", are necessary for facilitating complex, ad hoc queries and
spotting trends.
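
As a toy illustration of that last point (hypothetical data, in Python),
compare re-joining normalized fragments for every ad hoc question with
keeping a hierarchical roll-up that already feeds detail into higher levels
of summary:

from collections import defaultdict

# normalized "knowledge base": three fragmented tables
sales = [(1, "p1", 3), (2, "p2", 1), (3, "p1", 5), (4, "p3", 2)]  # (order, product, qty)
products = {"p1": "books", "p2": "music", "p3": "books"}          # product -> category
prices = {"p1": 10.0, "p2": 15.0, "p3": 8.0}                      # product -> unit price

# relational style: every ad hoc question re-joins the fragments
revenue_by_category = defaultdict(float)
for _, product, qty in sales:
    revenue_by_category[products[product]] += qty * prices[product]
print(dict(revenue_by_category))                  # {'books': 96.0, 'music': 15.0}

# "multidimensional" style: keep a hierarchical summary already synthesized,
# with category totals feeding into a grand total, so trend questions
# become simple lookups
summary = {"by_category": dict(revenue_by_category),
           "total": sum(revenue_by_category.values())}
print(summary["by_category"]["books"], summary["total"])   # 96.0 111.0

The join answers the question, but only after re-assembling the fragments;
the roll-up trades storage and update cost for having the synthesis already
in place when a trend question arrives.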

In conclusion, it seems that mental development begins (in the infant)
at the enactive level, proceeds to develop iconic representations of
objects and actions (the toddling stage), and 'finally' acquires a
detailed symbology or language for more precise and complex description
of the world (the student stage). BUT, once the detailed language is in
place for continual knowledge gathering, the mind then needs to work
'backward': First, it must build up or extract new icons to represent
more abstract, compound subjects; and then, it must simulate a
cyberspace where these new icons can interact in an enactive manner.
Most of us are able to extract new icons to various extents. However,
the ability to simulate or visualize a higher-level world for these new
icons, as Einstein was able to do, is a much rarer skill, one that can
often be inhibited by the formal rules and constraints of the language
we have previously worked so hard to acquire. Still, it must be
recognized that each mode of thought has its advantages and
disadvantages and all three must be integrated in order to form a
complete and coherent 'picture'.

Mark Crosby


