"Truth Maintenance," as I recall, was a combination of ideas about
constraints, nonmonotonic logic, and versioning databases as in Conniver.
The original papers, from the '70s, would be listed in the MIT AI Lab
publications list (this is a great catalog to have and order from!).
The idea was to have a system that would maintain consistency of its
database of beliefs, given constraints among assertions (e.g., if X is a
bird, X normally has wings), current incomplete knowledge, hypotheses,
and changes to what is known. It would be able to "imagine" (my word)
specified possibilities (what if X had no wings?) and their consequences,
and to absorb new information, deriving its consequences, including
contradictions with what had been assumed based on default rules (hmm, then how
did X get in here?). The "Truth" that was being maintained was probably
the truth or falsehood of different assertions in hypothetical and/or
changing "worlds" (=versions or views) in the database. Making sure
all the switches get switched right.
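To make the flavor concrete, here's a toy sketch in Python (my own
invention, with made-up names like TMS, assume, and nogood -- not the
original Lisp machinery from the papers): defaults and hypotheses are
retractable assumptions, constraints are rules, and a "nogood" list
marks pairs of assertions that can't both hold. A real TMS would trace
the justifications and retract the guilty assumption by itself
(dependency-directed backtracking); here you flip the switch by hand.

class TMS:
    def __init__(self):
        self.assumptions = {}   # name -> enabled?  (defaults, hypotheses)
        self.rules = []         # (antecedent names, consequent name)
        self.nogoods = []       # pairs of assertions that can't both hold

    def assume(self, name, enabled=True):
        # A retractable belief: a default or a "what if".
        self.assumptions[name] = enabled

    def retract(self, name):
        # Switch an assumption off; its consequences evaporate.
        self.assumptions[name] = False

    def rule(self, antecedents, consequent):
        # Constraint: if all antecedents are believed, so is the consequent.
        self.rules.append((antecedents, consequent))

    def nogood(self, a, b):
        # Declare two assertions mutually contradictory.
        self.nogoods.append((a, b))

    def beliefs(self):
        # Recompute the belief set from the enabled assumptions
        # (naive fixed point; real TMSs update incrementally).
        believed = {n for n, on in self.assumptions.items() if on}
        changed = True
        while changed:
            changed = False
            for ants, cons in self.rules:
                if cons not in believed and all(a in believed for a in ants):
                    believed.add(cons)
                    changed = True
        return believed

    def check(self):
        # Report any nogood pair present in the current belief set.
        b = self.beliefs()
        return [(x, y) for x, y in self.nogoods if x in b and y in b]

tms = TMS()
tms.assume("X is a bird")                    # incoming fact
tms.assume("X is normal")                    # the default, until challenged
tms.rule(["X is a bird", "X is normal"], "X has wings")
tms.nogood("X has wings", "X has no wings")

print(tms.beliefs())    # believes "X has wings"
tms.assume("X has no wings")                 # new information arrives...
print(tms.check())      # ...contradiction detected
tms.retract("X is normal")                   # hmm, then how did X get in here?
print(tms.check())      # consistent again; "X has wings" is gone

The different "worlds" are just different settings of the assumption
switches; beliefs() rederives what's true in whichever one is current.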
--Steve
--
sw@tiac.net   Steve Witham   www.tiac.net/users/sw
under deconstruction
"...when activated, it pops a message off the bag and recurs with the
tail of the bag." --Vijay Saraswat and Patrick Lincoln