Reason maintenance
Reason maintenance[1][2] is a knowledge representation approach to the efficient handling of inferred information that is explicitly stored. Reason maintenance distinguishes between base facts, which can be defeated, and derived facts. As such it differs from belief revision, which, in its basic form, assumes that all facts are equally important. Reason maintenance was originally developed as a technique for implementing problem solvers.[2] It encompasses a variety of techniques that share a common architecture:[3] two components, a reasoner and a reason maintenance system, communicate with each other via an interface. The reasoner uses the reason maintenance system to record its inferences and the justifications ("reasons") for those inferences. The reasoner also informs the reason maintenance system of the currently valid base facts (assumptions). The reason maintenance system uses this information to compute the truth value of the stored derived facts and to restore consistency if an inconsistency is derived.
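The division of labour across that interface can be pictured with a minimal sketch (the class and method names below are illustrative assumptions, not the interface of any particular system): the reasoner reports inferences and valid assumptions, and queries the reason maintenance system for the current belief status of a fact.

```python
# Illustrative interface sketch only; real systems (e.g. Doyle's TMS,
# de Kleer's ATMS) differ considerably in their actual interfaces.

class ReasonMaintenanceSystem:
    def record_justification(self, antecedents, consequent):
        """Reasoner reports an inference: `consequent` was derived from `antecedents`."""
        raise NotImplementedError

    def declare_assumption(self, fact, valid=True):
        """Reasoner declares a base fact (assumption) currently valid or invalid."""
        raise NotImplementedError

    def is_believed(self, fact):
        """Truth value of a stored fact under the currently valid assumptions."""
        raise NotImplementedError

    def restore_consistency(self, contradiction):
        """Invoked when an inconsistency has been derived."""
        raise NotImplementedError
```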
A truth maintenance system, or TMS, is a knowledge representation method for representing both beliefs and their dependencies and an algorithm called the "truth maintenance algorithm" that manipulates and maintains the dependencies. The name truth maintenance is due to the ability of these systems to restore consistency.
A truth maintenance system maintains consistency between previously believed knowledge and currently believed knowledge in the knowledge base (KB) through revision. If the currently believed statements contradict knowledge in the KB, the KB is updated with the new knowledge. It may later happen that the retracted statements come to be believed again, so that the earlier knowledge is needed in the KB once more; if that knowledge had been discarded, it would have to be re-derived. A TMS avoids such retracing: it keeps track of contradictory data with the help of a dependency record. This record reflects the retractions and additions, and keeps the inference engine (IE) aware of its current belief set.
Each statement that has at least one valid justification is made part of the current belief set. When a contradiction is found, the statement(s) responsible for it are identified and the records are updated accordingly. This process is called dependency-directed backtracking.
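The idea behind dependency-directed backtracking can be illustrated with a small, deliberately simplified sketch (the data and function below are hypothetical; every leaf of the record is treated here as a retractable assumption): because each derived statement records what it was inferred from, a contradiction can be traced directly back to the assumptions it rests on, rather than undoing inferences in chronological order.

```python
def assumptions_behind(belief, support):
    """Collect the assumptions underlying `belief` by following the dependency record.
    `support` maps each derived belief to the beliefs it was inferred from."""
    if belief not in support:                 # a leaf: treated as an assumption
        return {belief}
    result = set()
    for antecedent in support[belief]:
        result |= assumptions_behind(antecedent, support)
    return result

# Dependency record produced by the reasoner (illustrative data).
support = {
    "wet(grass)": ["sprinkler_on", "sprinkler_reaches_grass"],
    "CONTRADICTION": ["wet(grass)", "dry(grass)"],
}

# Dependency-directed backtracking targets exactly these culprits,
# not whatever happened to be asserted most recently.
print(assumptions_behind("CONTRADICTION", support))
# e.g. {'sprinkler_on', 'sprinkler_reaches_grass', 'dry(grass)'} (set order may vary)
```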
The TMS algorithm maintains the records in the form of a dependency network. Each node in the network is an entry in the KB (a premise, an antecedent, an inference rule, etc.). Each arc of the network represents an inference step through which the node was derived.
A premise is a fundamental belief that is assumed to be true; premises need no justification. The set of premises is the basis from which the justifications for all other nodes are derived.
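A minimal dependency network in this style might be sketched as follows (a simplified, justification-based illustration; the class names and methods are assumptions of this sketch, not a published interface). It shows justifications being recorded rather than discarded, so a fact retracted when an assumption is disabled becomes believed again, without any re-derivation, as soon as the assumption is re-enabled.

```python
# A much-simplified, hypothetical TMS dependency network.

class Justification:
    """Records one inference step: `consequent` holds whenever all `antecedents` are believed."""
    def __init__(self, antecedents, consequent):
        self.antecedents = antecedents      # arcs from the supporting nodes
        self.consequent = consequent        # node derived by this step

class Node:
    """A KB entry: a premise, an assumption, or a derived statement."""
    def __init__(self, datum, premise=False):
        self.datum = datum
        self.premise = premise              # premises are believed unconditionally
        self.enabled = premise              # assumptions can be switched on and off
        self.justifications = []            # the dependency record: never discarded
        self.believed = premise

class TMS:
    def __init__(self):
        self.nodes = {}

    def node(self, datum, premise=False):
        return self.nodes.setdefault(datum, Node(datum, premise))

    def justify(self, antecedents, consequent):
        """Record an inference made by the reasoner."""
        j = Justification([self.node(a) for a in antecedents], self.node(consequent))
        j.consequent.justifications.append(j)
        self.propagate()

    def enable(self, datum, on=True):
        """The reasoner declares a base fact (assumption) valid or invalid."""
        self.node(datum).enabled = on
        self.propagate()

    def propagate(self):
        """A node is believed if it is a premise, an enabled assumption, or has
        a justification whose antecedents are all believed."""
        for n in self.nodes.values():
            n.believed = n.premise or n.enabled
        changed = True
        while changed:
            changed = False
            for n in self.nodes.values():
                if not n.believed and any(all(a.believed for a in j.antecedents)
                                          for j in n.justifications):
                    n.believed = True
                    changed = True

tms = TMS()
tms.node("bird(tweety)", premise=True)
tms.enable("not abnormal(tweety)")                      # an assumption
tms.justify(["bird(tweety)", "not abnormal(tweety)"], "flies(tweety)")
print(tms.node("flies(tweety)").believed)               # True
tms.enable("not abnormal(tweety)", on=False)            # retract the assumption
print(tms.node("flies(tweety)").believed)               # False, but the record remains
tms.enable("not abnormal(tweety)")                      # believe it again
print(tms.node("flies(tweety)").believed)               # True, with no re-derivation
```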
There are two types of justification for a node (a rough encoding of both is sketched after this list):
- Support list (SL)
- Conditional proof (CP)
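Following Doyle's formulation informally, a support-list justification is valid when every node on its in-list is currently believed and every node on its out-list is not, while a conditional-proof justification is valid when its consequent node holds whenever the listed hypotheses do. A minimal, illustrative encoding of the two record types might look like this (the field names are assumptions of this sketch, not Doyle's exact interface):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SupportList:
    """SL justification: valid when every in-node is believed (IN)
    and every out-node is not believed (OUT)."""
    in_nodes: List[str] = field(default_factory=list)
    out_nodes: List[str] = field(default_factory=list)

    def valid(self, believed: set) -> bool:
        return (all(n in believed for n in self.in_nodes)
                and all(n not in believed for n in self.out_nodes))

@dataclass
class ConditionalProof:
    """CP justification: valid when the consequent holds whenever every
    in-hypothesis is believed and every out-hypothesis is not; checking this
    requires hypothetical reasoning, so implementations commonly translate
    CP justifications into equivalent SL ones."""
    consequent: str
    in_hypotheses: List[str] = field(default_factory=list)
    out_hypotheses: List[str] = field(default_factory=list)

# Example: "flies(tweety)" is supported as long as "bird(tweety)" is believed
# and "abnormal(tweety)" is not.
sl = SupportList(in_nodes=["bird(tweety)"], out_nodes=["abnormal(tweety)"])
print(sl.valid({"bird(tweety)"}))                       # True
print(sl.valid({"bird(tweety)", "abnormal(tweety)"}))   # False
```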
Many kinds of truth maintenance systems exist. Two major types are single-context and multi-context truth maintenance. In single-context systems, consistency is maintained among all facts in memory (the KB) and corresponds to the notion of consistency found in classical logic. Multi-context systems support paraconsistency by allowing consistency to be relative to a subset of the facts in memory, a context, according to the history of logical inference. This is achieved by tagging each fact or deduction with its logical history. Multi-agent truth maintenance systems perform truth maintenance across multiple memories, often located on different machines. de Kleer's assumption-based truth maintenance system (ATMS, 1986) was used in systems based upon KEE on the Lisp Machine. The first multi-agent TMS was created by Mason and Johnson; it was a multi-context system. Bridgeland and Huhns later created the first single-context multi-agent system.
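The tagging used by multi-context systems can be illustrated with a much-simplified, ATMS-style label computation (a sketch under the assumption that a label is just a set of assumption environments; de Kleer's actual algorithm maintains its labels incrementally and enforces further properties such as minimality):

```python
from itertools import product

def combine(antecedent_labels, nogoods):
    """Label of a derived fact: union each choice of antecedent environments,
    dropping any environment that contains a known-inconsistent set (nogood),
    then keeping only minimal environments."""
    label = set()
    for choice in product(*antecedent_labels):
        env = frozenset().union(*choice)
        if not any(bad <= env for bad in nogoods):
            label.add(env)
    return {e for e in label if not any(other < e for other in label)}

# Assumptions A, B, C: 'wet' holds under {A} or {B}; 'cold' holds under {C};
# the environment {B, C} has been found inconsistent (a nogood).
wet = {frozenset({"A"}), frozenset({"B"})}
cold = {frozenset({"C"})}
nogoods = {frozenset({"B", "C"})}

# A fact derived from 'wet' and 'cold' together holds only in {A, C}:
print(combine([wet, cold], nogoods))
# {frozenset({'A', 'C'})} -- the {B, C} environment is ruled out
```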
See also
- Artificial intelligence
- Belief revision
- Knowledge acquisition
- Knowledge representation
- Neurath's boat
References
- Doyle, J., 1983. The ins and outs of reason maintenance, in: Proceedings of the Eighth International Joint Conference on Artificial Intelligence - Volume 1, IJCAI’83. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, pp. 349–351.
- Doyle, J.: Truth maintenance systems for problem solving. Tech. Rep. AI-TR-419, Dep. of Electrical Engineering and Computer Science of MIT (1978)
- McAllester, D.A.: Truth maintenance. AAAI90 (1990)
Other references
- Bridgeland, D. M. & Huhns, M. N., Distributed Truth Maintenance. Proceedings of. AAAI–90: Eighth National Conference on Artificial Intelligence, 1990.
- J. de Kleer (1986). An assumption-based TMS. Artificial Intelligence, 28:127–162.
- J. Doyle (1979). A truth maintenance system. Artificial Intelligence, Vol. 12, No. 3, pp. 251–272.
- U. Junker and K. Konolige (1990). Computing the extensions of autoepistemic and default logics with a truth maintenance system. In Proceedings of the Eighth National Conference on Artificial Intelligence (AAAI'90), pages 278–283. MIT Press.
- Mason, C. and Johnson, R. DATMS: A Framework for Assumption Based Reasoning, in Distributed Artificial Intelligence, Vol. 2, Morgan Kaufmann Publishers, Inc., 1989.
- D. A. McAllester (1978). A three-valued truth maintenance system. Massachusetts Institute of Technology, Artificial Intelligence Laboratory. AI Memo 473.
- G. M. Provan (1988). A complexity analysis of assumption-based truth maintenance systems. In B. Smith and G. Kelleher, editors, Reason Maintenance Systems and their Applications, pages 98–113. Ellis Horwood, New York.
- G. M. Provan (1990). The computational complexity of multiple-context truth maintenance systems. In Proceedings of the Ninth European Conference on Artificial Intelligence (ECAI'90), pages 522–527.
- R. Reiter and J. de Kleer (1987). Foundations of assumption-based truth maintenance systems: Preliminary report. In Proceedings of the Sixth National Conference on Artificial Intelligence (AAAI'87), pages 183–188.