Automated theorem proving

Automated theorem proving (also known as ATP or automated deduction) is a subfield of automated reasoning and mathematical logic dealing with proving mathematical theorems by computer programs. Automated reasoning over mathematical proof was a major impetus for the development of computer science.

Logical foundations

While the roots of formalised logic go back to Aristotle, the end of the 19th and early 20th centuries saw the development of modern logic and formalised mathematics. Frege's Begriffsschrift (1879) introduced both a complete propositional calculus and what is essentially modern predicate logic.[1] His Foundations of Arithmetic, published in 1884,[2] expressed (parts of) mathematics in formal logic. This approach was continued by Russell and Whitehead in their influential Principia Mathematica, first published in 1910–1913,[3] and with a revised second edition in 1927.[4] Russell and Whitehead thought they could derive all mathematical truth using axioms and inference rules of formal logic, in principle opening up the process to automation. In 1920, Thoralf Skolem simplified a previous result by Leopold Löwenheim, leading to the Löwenheim–Skolem theorem and, in 1930, to the notion of a Herbrand universe and a Herbrand interpretation that allowed the (un)satisfiability of first-order formulas (and hence the validity of a theorem) to be reduced to (potentially infinitely many) propositional satisfiability problems.[5]

In 1929, Mojżesz Presburger showed that the theory of natural numbers with addition and equality (now called Presburger arithmetic in his honor) is decidable and gave an algorithm that could determine if a given sentence in the language was true or false.[6][7] However, shortly after this positive result, Kurt Gödel published On Formally Undecidable Propositions of Principia Mathematica and Related Systems (1931), showing that in any sufficiently strong axiomatic system there are true statements which cannot be proved in the system. This topic was further developed in the 1930s by Alonzo Church and Alan Turing, who on the one hand gave two independent but equivalent definitions of computability, and on the other gave concrete examples for undecidable questions.
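
Presburger arithmetic remains a standard example of a decidable theory, and modern solvers implement decision procedures for it. As an illustration (using the Z3 solver listed in the comparison table below, not Presburger's original procedure), the following sketch decides a simple sentence of the theory; it assumes the z3-solver Python package is installed.

```python
# Deciding a sentence of Presburger arithmetic with the Z3 SMT solver
# (an illustration, not Presburger's original algorithm).
# Assumes the z3-solver package is installed: pip install z3-solver
from z3 import Ints, ForAll, Exists, Or, prove

x, y = Ints('x y')

# "Every integer is even or odd": for all x there is y with x = 2y or x = 2y + 1.
sentence = ForAll(x, Exists(y, Or(x == 2 * y, x == 2 * y + 1)))

prove(sentence)  # prints "proved", since the sentence is valid
```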

First implementations

Shortly after World War II, the first general purpose computers became available. In 1954, Martin Davis programmed Presburger's algorithm for a JOHNNIAC vacuum tube computer at the Princeton Institute for Advanced Study. According to Davis, "Its great triumph was to prove that the sum of two even numbers is even".[7][8] More ambitious was the Logic Theory Machine in 1956, a deduction system for the propositional logic of the Principia Mathematica, developed by Allen Newell, Herbert A. Simon and J. C. Shaw. Also running on a JOHNNIAC, the Logic Theory Machine constructed proofs from a small set of propositional axioms and three deduction rules: modus ponens, (propositional) variable substitution, and the replacement of formulas by their definition. The system used heuristic guidance, and managed to prove 38 of the first 52 theorems of the Principia.[7]

The "heuristic" approach of the Logic Theory Machine tried to emulate human mathematicians, and could not guarantee that a proof could be found for every valid theorem even in principle. In contrast, other, more systematic algorithms achieved, at least theoretically, completeness for first-order logic. Initial approaches relied on the results of Herbrand and Skolem to convert a first-order formula into successively larger sets of propositional formulae by instantiating variables with terms from the Herbrand universe. The propositional formulas could then be checked for unsatisfiability using a number of methods. Gilmore's program used conversion to disjunctive normal form, a form in which the satisfiability of a formula is obvious.[7][9]

Decidability of the problem

Depending on the underlying logic, the problem of deciding the validity of a formula varies from trivial to impossible. For the frequent case of propositional logic, the problem is decidable but co-NP-complete, and hence only exponential-time algorithms are believed to exist for general proof tasks. For first-order predicate calculus, Gödel's completeness theorem states that the theorems (provable statements) are exactly the logically valid well-formed formulas, so the set of valid formulas is recursively enumerable: given unbounded resources, any valid formula can eventually be proven. However, invalid formulas (those that are not entailed by a given theory) cannot always be recognized.
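
A brute-force decision procedure for propositional validity simply evaluates the formula under all 2^n truth assignments, which illustrates both the decidability and the exponential cost. A minimal sketch, with formulas represented (purely for convenience) as Python functions over Boolean arguments:

```python
# Truth-table check of propositional validity: exponential in the number of
# variables, as expected for a co-NP-complete problem.
from itertools import product

def is_valid(formula, num_vars):
    """True iff `formula` evaluates to True under every truth assignment."""
    return all(formula(*assignment)
               for assignment in product([False, True], repeat=num_vars))

implies = lambda a, b: (not a) or b

# Peirce's law ((p -> q) -> p) -> p is valid; p -> q alone is not.
print(is_valid(lambda p, q: implies(implies(implies(p, q), p), p), 2))  # True
print(is_valid(lambda p, q: implies(p, q), 2))                          # False
```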

The above considerations apply to first-order theories, such as Peano arithmetic. However, for a specific model that may be described by a first-order theory, some statements may be true but undecidable in the theory used to describe the model. For example, by Gödel's incompleteness theorem, we know that any theory whose proper axioms are true for the natural numbers cannot prove all first-order statements true for the natural numbers, even if the list of proper axioms is allowed to be infinite, as long as it is recursively enumerable. It follows that an automated theorem prover will fail to terminate while searching for a proof precisely when the statement being investigated is undecidable in the theory being used, even if it is true in the model of interest. Despite this theoretical limit, in practice, theorem provers can solve many hard problems, even in models that are not fully described by any recursively enumerable first-order theory (such as the integers with addition and multiplication).

A simpler, but related, problem is proof verification, where an existing proof for a theorem is certified valid. For this, it is generally required that each individual proof step can be verified by a primitive recursive function or program, and hence the problem is always decidable.
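
A toy example of how mechanical such checking can be: the sketch below verifies a Hilbert-style propositional proof in which every line must be one of the given axioms or follow from two earlier lines by modus ponens. The proof format and names are invented for the illustration and do not correspond to any particular system.

```python
# Illustrative proof checker: each step is either an assumed axiom or an
# application of modus ponens to two earlier steps. Formulas are nested
# tuples, with ('->', A, B) denoting the implication A -> B.
def check_proof(axioms, steps):
    """steps: list of ('axiom', formula) or ('mp', i, j, formula), where step i
    proves the antecedent A and step j proves A -> formula."""
    proved = []
    for step in steps:
        if step[0] == 'axiom':
            formula = step[1]
            if formula not in axioms:
                return False
        elif step[0] == 'mp':
            _, i, j, formula = step
            if i >= len(proved) or j >= len(proved):
                return False
            if proved[j] != ('->', proved[i], formula):
                return False
        else:
            return False
        proved.append(formula)
    return True

# Derive q from the axioms p and p -> q by a single modus ponens step.
axioms = {('p',), ('->', ('p',), ('q',))}
steps = [('axiom', ('p',)),
         ('axiom', ('->', ('p',), ('q',))),
         ('mp', 0, 1, ('q',))]
print(check_proof(axioms, steps))  # True
```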

Since the proofs generated by automated theorem provers are typically very large, the problem of proof compression is crucial, and various techniques aimed at making the prover's output smaller, and consequently more easily understandable and checkable, have been developed.

Proof assistants require a human user to give hints to the system. Depending on the degree of automation, the prover can essentially be reduced to a proof checker, with the user providing the proof in a formal way, or significant proof tasks can be performed automatically. Interactive provers are used for a variety of tasks, but even fully automatic systems have proved a number of interesting and hard theorems, including at least one that has eluded human mathematicians for a long time, namely the Robbins conjecture.[10][11] However, these successes are sporadic, and work on hard problems usually requires a proficient user.

Another distinction is sometimes drawn between theorem proving and other techniques, where a process is considered to be theorem proving if it consists of a traditional proof, starting with axioms and producing new inference steps using rules of inference. Other techniques would include model checking, which, in the simplest case, involves brute-force enumeration of many possible states (although the actual implementation of model checkers requires much cleverness, and does not simply reduce to brute force).
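
A minimal sketch of that brute-force idea: explore every reachable state of a small transition system by breadth-first search and check an invariant in each. The toy system (a counter modulo 6) and all names are illustrative; real model checkers add symbolic representations, abstraction, and many other techniques.

```python
# Explicit-state "model checking" by exhaustive enumeration of reachable states.
from collections import deque

def check_invariant(initial_states, successors, invariant):
    """Return (True, None) if `invariant` holds in every reachable state,
    otherwise (False, counterexample_state)."""
    seen = set(initial_states)
    queue = deque(initial_states)
    while queue:
        state = queue.popleft()
        if not invariant(state):
            return False, state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True, None

# Toy system: a counter starting at 0 that increments modulo 6.
step = lambda s: [(s + 1) % 6]
print(check_invariant([0], step, lambda s: s < 6))   # (True, None)
print(check_invariant([0], step, lambda s: s != 4))  # (False, 4)
```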

There are hybrid theorem proving systems that use model checking as an inference rule. There are also programs that were written to prove a particular theorem, with a (usually informal) proof that if the program finishes with a certain result, then the theorem is true. A good example of this was the machine-aided proof of the four color theorem, which was very controversial as the first claimed mathematical proof that was essentially impossible for humans to verify because of the enormous size of the program's calculation (such proofs are called non-surveyable proofs). Another example of a program-assisted proof is the one showing that the game of Connect Four can always be won by the first player.

Industrial uses

Commercial use of automated theorem proving is mostly concentrated in integrated circuit design and verification. Since the Pentium FDIV bug, the complicated floating point units of modern microprocessors have been designed with extra scrutiny. AMD, Intel and others use automated theorem proving to verify that division and other operations are correctly implemented in their processors.

First-order theorem proving

In the late 1960s agencies funding research in automated deduction began to emphasize the need for practical applications. One of the first fruitful areas was that of program verification whereby first-order theorem provers were applied to the problem of verifying the correctness of computer programs in languages such as Pascal, Ada, etc. Notable among early program verification systems was the Stanford Pascal Verifier developed by David Luckham at Stanford University. This was based on the Stanford Resolution Prover also developed at Stanford using John Alan Robinson's resolution principle. This was the first automated deduction system to demonstrate an ability to solve mathematical problems that were announced in the Notices of the American Mathematical Society before solutions were formally published.

First-order theorem proving is one of the most mature subfields of automated theorem proving. The logic is expressive enough to allow the specification of arbitrary problems, often in a reasonably natural and intuitive way. On the other hand, it is only semi-decidable, and a number of sound and complete calculi have been developed, enabling fully automated systems. More expressive logics, such as higher-order logics, allow the convenient expression of a wider range of problems than first-order logic, but theorem proving for these logics is less well developed.
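
The best-known such calculus is resolution, mentioned above in connection with Robinson and below in connection with Otter and Prover9. For orientation only, here is a minimal propositional version of resolution by saturation; a genuine first-order prover adds unification, term indexing, redundancy elimination, and much more.

```python
# Minimal propositional resolution by saturation: repeatedly resolve pairs of
# clauses until the empty clause appears (input is unsatisfiable) or no new
# clause can be derived (input is satisfiable). Clauses are sets of literals;
# literals are non-zero integers, with -n denoting the negation of n.
def resolvents(c1, c2):
    """All clauses obtained by resolving c1 and c2 on a complementary pair."""
    return [(c1 - {lit}) | (c2 - {-lit}) for lit in c1 if -lit in c2]

def unsatisfiable(clauses):
    clauses = {frozenset(c) for c in clauses}
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                if c1 is c2:
                    continue
                for r in resolvents(c1, c2):
                    if not r:
                        return True          # empty clause: contradiction derived
                    new.add(frozenset(r))
        if new <= clauses:
            return False                     # saturated without contradiction
        clauses |= new

# {p or q, not p, not q} is unsatisfiable; {p or q, not p} is satisfiable.
print(unsatisfiable([{1, 2}, {-1}, {-2}]))  # True
print(unsatisfiable([{1, 2}, {-1}]))        # False
```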

Benchmarks, competitions, and sources

The quality of implemented systems has benefited from the existence of a large library of standard benchmark examples — the Thousands of Problems for Theorem Provers (TPTP) Problem Library[12] — as well as from the CADE ATP System Competition (CASC), a yearly competition of first-order systems for many important classes of first-order problems.

Some important systems (all have won at least one CASC competition division) are listed below.

  • E is a high-performance prover for full first-order logic, but built on a purely equational calculus, originally developed in the automated reasoning group of the Technical University of Munich under the direction of Wolfgang Bibel, and now at the Baden-Württemberg Cooperative State University in Stuttgart.
  • Otter, developed at the Argonne National Laboratory, is based on first-order resolution and paramodulation. Otter has since been replaced by Prover9, which is paired with Mace4.
  • SETHEO is a high-performance system based on the goal-directed model elimination calculus, originally developed by a team under the direction of Wolfgang Bibel. E and SETHEO have been combined (with other systems) in the composite theorem prover E-SETHEO.
  • Vampire is developed and implemented at the University of Manchester by Andrei Voronkov and Kryštof Hoder, formerly also by Alexandre Riazanov. It has won the CADE ATP System Competition in the most prestigious CNF (MIX) division for eleven years (1999, 2001–2010).
  • Waldmeister is a specialized system for unit-equational first-order logic developed by Arnim Buch and Thomas Hillenbrand. It won the CASC UEQ division for fourteen consecutive years (1997–2010).
  • SPASS is a first-order logic theorem prover with equality, developed by the Automation of Logic research group at the Max Planck Institute for Computer Science.

The Theorem Prover Museum is an initiative to conserve the sources of theorem prover systems for future analysis, since they are important cultural/scientific artefacts. It has the sources of many of the systems mentioned above.

Software systems

Comparison
Name | License type | Web service | Library | Standalone | Last update
ACL2 | 3-clause BSD | No | No | Yes | May 2019
Prover9/Otter | Public Domain | Via System on TPTP | Yes | No | 2009
Metis | MIT License | No | Yes | No | March 1, 2018
MetiTarski | MIT | Via System on TPTP | Yes | Yes | October 21, 2014
Jape | GPLv2 | Yes | Yes | No | May 15, 2015
PVS | GPLv2 | No | Yes | No | January 14, 2013
Leo II | BSD License | Via System on TPTP | Yes | Yes | 2013
EQP | ? | No | Yes | No | May 2009
SAD | GPLv3 | Yes | Yes | No | August 27, 2008
PhoX | ? | No | Yes | No | September 28, 2017
KeYmaera | GPL | Via Java Webstart | Yes | Yes | March 11, 2015
Gandalf | ? | No | Yes | No | 2009
E | GPL | Via System on TPTP | No | Yes | July 4, 2017
SNARK | Mozilla Public License 1.1 | No | Yes | No | 2012
Vampire | Vampire License | Via System on TPTP | Yes | Yes | December 14, 2017
Theorem Proving System (TPS) | TPS Distribution Agreement | No | Yes | No | February 4, 2012
SPASS | FreeBSD license | Yes | Yes | Yes | November 2005
IsaPlanner | GPL | No | Yes | Yes | 2007
KeY | GPL | Yes | Yes | Yes | October 11, 2017
Princess | LGPL v2.1 | Via Java Webstart and System on TPTP | Yes | Yes | January 27, 2018
iProver | GPL | Via System on TPTP | No | Yes | 2018
Meta Theorem | Freeware | No | No | Yes | April 2020
Z3 Theorem Prover | MIT License | Yes | Yes | Yes | November 19, 2019

Notes

  1. Frege, Gottlob (1879). Begriffsschrift. Verlag Louis Neuert.
  2. Frege, Gottlob (1884). Die Grundlagen der Arithmetik (PDF). Breslau: Wilhelm Kobner. Archived from the original (PDF) on 2007-09-26. Retrieved 2012-09-02.
  3. Bertrand Russell; Alfred North Whitehead (1910–1913). Principia Mathematica (1st ed.). Cambridge University Press.
  4. Bertrand Russell; Alfred North Whitehead (1927). Principia Mathematica (2nd ed.). Cambridge University Press.
  5. Herbrand, Jacques (1930). Recherches sur la théorie de la démonstration.
  6. Presburger, Mojżesz (1929). "Über die Vollständigkeit eines gewissen Systems der Arithmetik ganzer Zahlen, in welchem die Addition als einzige Operation hervortritt". Comptes Rendus du I Congrès de Mathématiciens des Pays Slaves. Warszawa: 92–101.
  7. Davis, Martin (2001), "The Early History of Automated Deduction", in Robinson, Alan; Voronkov, Andrei (eds.), Handbook of Automated Reasoning, vol. 1, Elsevier.
  8. Bibel, Wolfgang (2007). "Early History and Perspectives of Automated Deduction" (PDF). KI 2007. LNAI. Springer (4667): 2–18. Retrieved 2 September 2012.
  9. Gilmore, Paul (1960). "A proof procedure for quantification theory: its justification and realisation". IBM Journal of Research and Development. 4: 28–35. doi:10.1147/rd.41.0028.
  10. W.W. McCune (1997). "Solution of the Robbins Problem". Journal of Automated Reasoning. 19 (3): 263–276. doi:10.1023/A:1005843212881.
  11. Gina Kolata (December 10, 1996). "Computer Math Proof Shows Reasoning Power". The New York Times. Retrieved 2008-10-11.
  12. Sutcliffe, Geoff. "The TPTP Problem Library for Automated Theorem Proving". Retrieved 15 July 2019.
  13. Bundy, Alan. The automation of proof by mathematical induction. 1999.
  14. Artosi, Alberto, Paola Cattabriga, and Guido Governatori. "Ked: A deontic theorem prover." Eleventh International Conference on Logic Programming (ICLP’94). 1994.
  15. Otten, Jens; Bibel, Wolfgang (2003). "LeanCoP: Lean connection-based theorem proving". Journal of Symbolic Computation. 36 (1–2): 139–161. doi:10.1016/S0747-7171(03)00037-3.
  16. del Cerro, Luis Farinas, et al. "Lotrec: the generic tableau prover for modal and description logics." International Joint Conference on Automated Reasoning. Springer, Berlin, Heidelberg, 2001.
  17. Hickey, Jason, et al. "MetaPRL–a modular logical environment." International Conference on Theorem Proving in Higher Order Logics. Springer, Berlin, Heidelberg, 2003.
  18. Mathematica documentation
