Gödel's incompleteness theorems

Gödel's incompleteness theorems demonstrate that, in mathematics, it is impossible to prove everything that is true.

More specifically, the first incompleteness theorem states that, in any consistent formulation of number theory which is "rich enough," there are statements which cannot be proven or disproven within that formulation. The second incompleteness theorem states that such a formulation of number theory cannot prove its own consistency.

The theorems also apply to any theory which includes number theory, as long as the theory is consistent and as long as it is expressed as is usual in mathematics, following rules such as that the axioms and proof procedures are fixed from the start and that expressions are of finite length. One major example of such a larger theory in mathematics is set theory, for in set theory one can define numbers and the operations on numbers, and prove the ordinary principles of arithmetic.
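For the record, the modern textbook formulation runs roughly as follows (our rendering, not wording from this article; "effectively axiomatized" is the precise version of the "fixed from the start" condition above). For any consistent, effectively axiomatized theory T that includes enough arithmetic:

    % First incompleteness theorem: some sentence G_T in T's language is
    % neither provable nor refutable in T.
    \exists\, G_T :\quad T \nvdash G_T \quad\text{and}\quad T \nvdash \lnot G_T

    % Second incompleteness theorem: T cannot prove the arithmetical
    % statement Con(T) expressing T's own consistency.
    T \nvdash \mathrm{Con}(T)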

Idea of proof

Kurt Gödel (1906–1978) demonstrated this by encoding the liar paradox into number theory itself, creating a well-formed mathematical statement that referred to itself as an unprovable statement. By the assumption of consistency, we know that this statement is true (for, if it were false, then it could be proven, which would be inconsistent). But in virtue of its being true, it cannot be proven (for that is exactly what it says). The final link in the chain of reasoning is the notion of "rich enough," which means that a system contains enough formalism to express a statement which refers to itself as an unprovable statement. This is achieved, in part, by showing that (1) statements in arithmetic can be associated with numbers in arithmetic and (2) a proof in arithmetic can be shown to correspond to arithmetical computations on those associated numbers.
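To make step (1) concrete, here is a toy version of Gödel numbering in Python (our illustration; the symbol table and the prime-power scheme are arbitrary choices standing in for Gödel's actual encoding). Each symbol gets a code, and a formula becomes the product of consecutive primes raised to those codes, so the formula can be recovered uniquely by factoring:

    # Toy Gödel numbering: encode a formula (a string of symbols) as one
    # natural number, 2^c1 * 3^c2 * 5^c3 * ..., one prime per symbol position.
    # The symbol codes below are arbitrary illustrative choices.
    SYMBOLS = {'0': 1, 'S': 2, '+': 3, '*': 4, '=': 5, '(': 6, ')': 7, 'x': 8}

    def primes():
        """Yield 2, 3, 5, 7, ... by trial division (fine for a toy)."""
        found = []
        n = 2
        while True:
            if all(n % p != 0 for p in found):
                found.append(n)
                yield n
            n += 1

    def godel_number(formula):
        """Map a formula to a number; distinct formulas get distinct numbers."""
        n = 1
        for p, ch in zip(primes(), formula):
            n *= p ** SYMBOLS[ch]
        return n

    print(godel_number('S(0)=S(0)'))  # "1 = 1" becomes one (large) number

Once formulas (and, by the same trick, whole proofs) are just numbers, "statement X is provable" becomes an arithmetical claim about numbers, which is what lets a statement indirectly talk about its own provability.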

In case you think you can get around this by adding this true (but unprovable) statement as an additional axiom of arithmetic (after all, you know that it is true), what happens is that the proof adapts, so that it generates yet another statement that refers to its own unprovability from the new, enlarged set of axioms.
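As a loose metaphor for why this escape fails (nothing below is real logic; godel_sentence is a hypothetical stand-in for the actual diagonalization construction), the Gödel sentence behaves like a function of the axiom set: change the axioms, and you get a fresh unprovable sentence.

    # Metaphorical sketch only: a "theory" is modelled as a frozen set of axiom
    # strings, and godel_sentence stands in for the real diagonal construction,
    # which produces a sentence asserting its own unprovability from the theory.
    def godel_sentence(axioms):
        return "I am unprovable from " + repr(sorted(axioms))

    T = frozenset({"axiom 1", "axiom 2"})
    G1 = godel_sentence(T)

    T_extended = T | {G1}            # adopt the old Gödel sentence as a new axiom
    G2 = godel_sentence(T_extended)  # the construction now yields a new sentence

    assert G1 != G2  # the enlarged theory has its own unprovable statement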

Common misconceptions

Not every mathematical theory is necessarily incomplete

The "arithmetic" that the theorem refers to is more than just addition, subtraction, multiplication and division with whole numbers. It also includes statements about "all numbers" or "some numbers," for example, statements about prime numbers; "there is no largest prime number." And there are parts of arithmetic which can be proven to be complete (there is one such part which excludes multiplication), as well as other interesting and complicated areas of mathematics which have been proven to be complete and consistent. So one should be careful when saying that "arithmetic" or "mathematics" is incomplete. Some mathematical theories are complete, for example, Euclidean geometry; its completeness does not contradict Gödel's theorem because geometry does not contain number theory. Also, there is even a proof that arithmetic (in the sense of the incompleteness theorems) is consistent; but that proof relies on methods that go beyond that arithmetic.

"True but unprovable"?

People tend to get confused about the assertion that Gödel's statement is "true but unprovable". In particular, what Gödel's theorem absolutely definitely most certainly doesn't say is that humans possess some kind of superior unformalizable intuition that allows them to see mathematical truths that cannot be captured by "mere math" or "mere logic".

In first-order logic, Gödel's completeness theorem says that every formula that is logically valid (roughly speaking, true in every model) is syntactically provable. Thus, every formula that is true in every model of first-order arithmetic is provable from the axioms of first-order arithmetic. And Gödel's statement is, in fact, not true in every model of first-order arithmetic: it is true in some models and false in others.
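In symbols (a standard rendering of the completeness theorem, not notation used elsewhere in this article): for a set of first-order sentences Γ and a sentence φ,

    % Gödel's completeness theorem: semantic consequence (truth in every
    % model of \Gamma) coincides with syntactic provability from \Gamma.
    \Gamma \models \varphi \iff \Gamma \vdash \varphi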

The problem is that first-order arithmetic is not powerful enough to pin down one specific model of the natural numbers: its axioms cannot single out the standard model of arithmetic, the ordinary natural numbers we all know and love (0, 1, 2, …).[note 1] Gödel's statement happens to be true in the standard model, but non-standard models contain, in addition to the standard numbers, other numbers not reachable by repeatedly incrementing from 0: chains of extra numbers that extend infinitely in both directions (ordered like the integers, but distinct from them). In non-standard models, there are Gödelian encodings of "proofs" that do not, in general, correspond to valid logical proofs; such a model also allows infinite chains that decode into something like "Gödel's statement is true, because not-not-Gödel's statement is true, because not-not-not-not-Gödel's statement is true, ad infinitum". So there are non-standard models where Gödel's statement is, in fact, false: they have "proof encodings" that actual first-order logic would not accept as proofs.

Unlike first-order logic, second-order logic has no analogue of the completeness theorem. In particular, while second-order arithmetic is powerful enough to pin down the standard model of arithmetic uniquely, eliminating all non-standard numbers, there are formulas that are true but cannot be proven from the axioms of second-order arithmetic using second-order logic. But then, humans cannot prove them either; they are no more powerful in this respect than computer programs or any other formalized process.

God and other "unknowables"

Some people are tempted to use Gödel's theorem as an escape hatch for their own pet theories that they consider "true but unprovable". Math cannot prove everything, therefore logical discussion of God is futile, so there! However, Gödel's theorem has a precise mathematical formulation, and so do the mathematical concepts of logical truth and provability; to even consider the truth or provability of a statement, it first needs to be formalized in the language of mathematical logic. "God", as an idea grounded in our imprecise maps of the real world, is clearly not a well-defined logical formula whose truth or falsehood is even meaningful to consider as a consequence of purely mathematical theories. This argument falls into not even wrong territory.

Notes

  1. First-order logic in general is very limited in pinning down specific models; by the Löwenheim–Skolem theorem, any first-order theory with an infinite model (and first-order arithmetic is obviously one of these) has an infinite number of meaningfully different infinite models. Heck, said theorem even proves there exists a countable model of the axioms of real numbers, despite Cantor's proof that the set of real numbers is uncountable.