Pseudomathematics
Pseudomathematics is any work, study or activity which claims to be mathematical but refuses to work within the standards of proof and rigour to which mathematics is subject. Much like other pseudoscience, it often relies on ignoring established results and methods, making unsubstantiated claims of fact, and ignoring or rejecting the work of experts. Unfortunately for practitioners of pseudomathematics, mathematics is a discipline of black and white: everything is either right or wrong (but sometimes fuzzy). There is little scope for debate or discussion, as only mathematical proof is relevant.
Pseudomathematics takes multiple forms, often focusing on disproving accepted results or proving things which have been shown to be impossible. A conventional mathematician is welcome to attack an established theorem or attempt a new proof, but must work within the rigour and framework of mathematics. To refute a theorem, one must prove it false or find an error in the accepted proof; one cannot simply argue against it. Likewise, modern mathematics has proven various tasks to be impossible, so attempting them without first addressing the impossibility proof is folly.
Common claims among pseudomathematicians include refutations of the work of Gödel and Cantor; attempts to solve compass-and-straightedge problems which were proven impossible in the 1800s; and attempts to change the values of mathematical constants or to question the accepted nature of irrationality, transcendence or complex numbers. Pseudomathematicians may use convincing and sophisticated mathematical vocabulary, but because such terms have highly specific meanings, the theories built on them tend to be not even wrong.
Compass and straightedge constructions
A common feature of pseudomathematics, and pseudoscience in general, is attempted solutions to problems which the layman can easily understand but which actually involve very complicated mathematics. Problems that anybody can understand, but into which only advanced mathematicians can provide real insight, attract a whole range of cranks offering different theories.
There is perhaps no greater example of this than certain compass and straightedge construction problems. These problems set a task to be completed with only a straightedge (a marked or unmarked ruler, depending on the problem) and a compass. The most famous is 'squaring the circle': given a circle of radius 1 unit, construct a square of equal area. The problem is easy to state and understand, and celebrated mathematicians including Archimedes, Pythagoras and Euclid grappled with it. Unfortunately, it was proven to be impossible in the 1800s as a consequence of the transcendence of π.[1][2]
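In outline, and taking as given the standard fact that compass-and-straightedge constructions can only ever produce algebraic lengths, the impossibility argument is a short sketch:

```latex
% Area of the unit circle: \pi. A square of equal area needs side s with
\[ s^2 = \pi \quad\Longrightarrow\quad s = \sqrt{\pi} . \]
% Every length constructible with compass and straightedge is an
% algebraic number (a root of a polynomial with integer coefficients).
% Lindemann (1882) proved that \pi is transcendental, so \sqrt{\pi} is
% transcendental as well, and therefore cannot be constructed.
```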
To this day, 'squaring the circle,' 'trisecting the angle' and 'doubling the cube' are still attempted by amateur mathematicians. Because the problems can be understood by anyone, while the reasons they are impossible often cannot, they seem within the reach of any layman who simply tries hard enough. No credible mathematician doubts the proofs of impossibility, and the American professor Underwood Dudley has collected and documented many such attempts in his books on mathematical cranks.
Areas attracting crank ideas
Elementary proof
An elementary proof is, loosely, a proof that uses only basic methods; the term has different meanings in different fields. In number theory, for example, an elementary proof is one that avoids complex analysis. It is important to remember that no proof in mathematics is "less valid" than another: a proof is either correct or incorrect. Despite that, many crank mathematicians and fringe engineers will reject or denigrate a proof which isn't 'elementary,' probably because they can't understand it. The use of complex numbers, or even of proof by contradiction (reductio ad absurdum), has been called into question, despite complete acceptance by modern mathematicians. Opposition to proof by contradiction usually appeals to a school of mathematics called intuitionism, but while intuitionism restricts proofs by contradiction, its proponents do not consider such proofs "unmathematical"; they are simply interested in what can be proven by explicit constructions, since such proofs often have more applications. No respected mathematician denies that proof by contradiction is an important mathematical tool, accepted since at least the time of Plato.
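For a sense of what a proof by contradiction looks like in practice, here is a sketch of the classic argument, known since antiquity, that the square root of 2 is irrational:

```latex
% Claim: \sqrt{2} is irrational.
% Proof by contradiction: suppose \sqrt{2} = p/q with integers p, q
% sharing no common factor. Squaring both sides gives
\[ 2q^2 = p^2 , \]
% so p^2 is even, hence p is even; write p = 2k. Then
\[ 2q^2 = 4k^2 \quad\Longrightarrow\quad q^2 = 2k^2 , \]
% so q is even too, contradicting the assumption that p and q share no
% common factor. Hence no such fraction exists.
```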
Complex numbers
"Die ganzen Zahlen hat der liebe Gott gemacht, alles andere ist Menschenwerk." ("God made the integers, all else is the work of man.")
—Leopold Kronecker
Complex numbers rely on the imaginary unit: a number which, when squared, equals -1. While not inherently any less "real" than real numbers or even negative numbers, the poor choice of name for the imaginary part of a complex number has made them a popular target for maths denialists. Any sort of number beyond the positive integers is an abstraction of quantitative properties with no direct physical meaning, but such abstractions make mathematical reasoning far more tractable, and the complex number field in particular has many convenient (and beautiful) properties that make it a natural choice for representing a great many of them. Despite this, 'complex number denial' crops up across the internet. Even some of those who do accept that imaginary numbers "exist" question their validity outside of mathematics, completely ignoring their widespread use in physics and engineering.[3]
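As a quick illustration that nothing mystical is going on, here is a short sketch using Python's standard complex-number support (the variable names are ours, chosen purely for the example):

```python
import cmath  # standard-library complex maths

i = 1j                      # Python's notation for the imaginary unit
print(i ** 2)               # (-1+0j): i squared really is -1

# Euler's formula, e^(i*theta) = cos(theta) + i*sin(theta), underlies
# the use of complex numbers for waves and AC circuit analysis.
theta = cmath.pi / 3
lhs = cmath.exp(1j * theta)
rhs = cmath.cos(theta) + 1j * cmath.sin(theta)
print(abs(lhs - rhs) < 1e-12)   # True, up to floating-point error
```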
Fermat's Last Theorem
Fermat's Last Theorem states that no three positive integers a, b and c can satisfy the equation a^n + b^n = c^n for any integer value of n greater than two. The quest to prove it famously began when Pierre de Fermat claimed to have a proof but lacked the space to write it in a margin. It took until the 1990s, and significant mathematical advances, to prove the theorem; in the intervening period, because the problem can be understood by a layman while any actual proof is very advanced, cranks regularly popped up claiming to prove it using elementary mathematics. The likelihood of an easy proof of such a well-known, centuries-old problem having simply been overlooked is of course minuscule, but cranks are encouraged not least because Fermat's own wording hinted at a short, elegant proof rather than what was ultimately found.
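A brute-force search is obviously no substitute for Wiles' proof, but a rough sketch like the following (the function name and search limit are arbitrary choices for illustration, and it only checks a, b and c up to that limit) at least shows that no small counterexample is waiting to be stumbled upon:

```python
# Exhaustive check that a^n + b^n = c^n has no solutions in small
# positive integers for n = 3, 4, 5 (a sketch, not a proof).
def small_counterexamples(limit=50, exponents=(3, 4, 5)):
    hits = []
    for n in exponents:
        powers = {c ** n: c for c in range(1, limit + 1)}
        for a in range(1, limit + 1):
            for b in range(a, limit + 1):
                if a ** n + b ** n in powers:
                    hits.append((a, b, powers[a ** n + b ** n], n))
    return hits

print(small_counterexamples())  # [] -- nothing found, as expected
```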
The value of π
The value of π (pi), which was proven to be irrational in 1761 and transcendental in 1882, has attracted hundreds of claims of an exact value from pseudomathematicians, cranks and laymen. Bill number 246 of the 1897 sitting of the Indiana State Legislature actually tried to (indirectly) set the value of π: the bill would have enshrined in law a supposedly correct method of squaring the circle.[4] The method works only if π is held to be 3.2. Sadly, π is not 3.2.[5] This should not be confused with the assertion, made by some legitimate mathematicians, that "π is wrong" and that people should use τ (tau) instead. That argument does not contest the value of π or its importance; it simply holds that τ (which equals 2π) is easier to teach with and simplifies trigonometric formulas. The tau-versus-pi argument is a notational debate, not a mathematical one (and it actually makes some interesting points).
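For anyone tempted by a legislated value, a few hundred terms of a convergent series already pin π down far past the first decimal place; this throwaway sketch uses the Nilakantha series:

```python
# Approximate pi with the Nilakantha series:
# pi = 3 + 4/(2*3*4) - 4/(4*5*6) + 4/(6*7*8) - ...
def nilakantha(terms):
    total, sign = 3.0, 1.0
    for k in range(terms):
        n = 2 * k + 2                      # 2, 4, 6, ...
        total += sign * 4.0 / (n * (n + 1) * (n + 2))
        sign = -sign
    return total

print(nilakantha(1000))   # 3.14159265... -- nowhere near 3.2
```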
Attempts to refute accepted theories
Being critical of accepted theories or attempting to disprove them is not intrinsically pseudoscientific; that sort of scepticism plays an important role in the scientific process. However, many crank mathematicians attempt to refute accepted theorems through verbal arguments, visual "proofs" or alleged "proofs" which neither deal with the complexity of the issue nor point out any error in the accepted proof. While these crank approaches turn up across mathematics, particular theorems and theories have attracted crank ire for generations; once again, it is usually those whose premise can be understood by the layman but whose complexities cannot. The works of Gödel and Cantor seem to attract a lot of internet cranks.
Gödel's incompleteness theorems
Gödel's two incompleteness theorems establish limitations inherent in any axiomatic system except the most trivial.[6] However complete we try to make an axiomatic system (the basis of mathematics), there will always be statements which are independent of it: statements which can be neither proved nor disproved from the axioms, so that either they or their negations could be added without creating an inconsistency. Gödel's theorems were very controversial when first published in 1931; the mathematical establishment of the time inclined to the belief that everything true could be proven, a position most famously enunciated by David Hilbert: "Wir müssen wissen. Wir werden wissen." (German for "We must know. We will know."). Despite this, the enormous upheaval in our understanding of mathematics settled down relatively quickly: no serious mathematician attempted to disprove Gödel's work, and the proof was accepted.[7] Today professional mathematicians fully accept the incompleteness theorems, but a certain breed of crank remains attracted to disproving them.[8]
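Stated a little more carefully (in the usual modern textbook form rather than Gödel's original wording, and glossing over the precise technical hypotheses), the two theorems look like this:

```latex
% First incompleteness theorem: if T is a consistent, effectively
% axiomatised theory containing enough arithmetic, there is a sentence
% G_T in the language of T such that
\[ T \not\vdash G_T \quad\mbox{and}\quad T \not\vdash \neg G_T . \]
% Second incompleteness theorem: such a theory T also cannot prove the
% formal statement of its own consistency,
\[ T \not\vdash \mathrm{Con}(T) . \]
```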
Cantor, set theory and infinity
The work of Georg Cantor (1845–1918) on set theory and infinity (∞) now forms an important part of the foundations of mathematics. Cantor's insights were revolutionary, redefining how we view infinity. Through his study of one-to-one correspondence, it became clear that there are different sizes of infinity: in layman's terms, two collections can both be infinite, and yet one can still be strictly bigger than the other. For example, ∞ + ∞ is the same size as ∞, but ∞^∞ is strictly bigger than ∞; Cantor's diagonal argument shows, for instance, that there are more real numbers than natural numbers, even though both sets are infinite. The idea that there are different kinds of infinity doesn't sit well with some people, and offered 'disproofs' of Cantor's theories range from barely mathematical to not even wrong. These attempts have come from amateur[9] and professional[10] mathematicians alike.
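Cantor's diagonal argument is short enough to act out in code: given any purported list of infinite 0/1 sequences (truncated here to finite prefixes for display), flipping the diagonal produces a sequence that differs from every entry on the list. The listing below is made up purely for illustration:

```python
# Cantor's diagonal argument, acted out on a finite prefix.
# Given any enumeration of 0/1 sequences, the sequence that flips the
# k-th digit of the k-th entry differs from every entry in the list.
def diagonal_flip(listing):
    return [1 - listing[k][k] for k in range(len(listing))]

listing = [
    [0, 0, 0, 0, 0],   # first five digits of sequence 1
    [1, 1, 1, 1, 1],   # sequence 2
    [0, 1, 0, 1, 0],   # sequence 3
    [1, 0, 0, 1, 1],   # sequence 4
    [1, 1, 0, 0, 1],   # sequence 5
]
missing = diagonal_flip(listing)
print(missing)  # [1, 0, 1, 0, 0]
# It differs from sequence k in position k, so it is not on the list.
print(all(missing[k] != listing[k][k] for k in range(5)))  # True
```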
The fucking 0.999… shit
Under the standard definition and notation for the real numbers, it has been well established that 0.999… (9 repeating) = 1.[11] Many, many heated internet arguments have taken place over this.
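One standard derivation (there are several, all ultimately resting on the definition of an infinite decimal as the limit of its partial sums):

```latex
% Let x = 0.999\ldots . Multiplying by 10 shifts the decimal point:
\[ 10x = 9.999\ldots = 9 + 0.999\ldots = 9 + x , \]
% so 9x = 9 and x = 1. Equivalently, as a geometric series:
\[ 0.999\ldots = \sum_{k=1}^{\infty} \frac{9}{10^{k}}
             = \frac{9/10}{1 - 1/10} = 1 . \]
```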
Millennium Problems
In 2000, the Clay Mathematics Institute offered $1,000,000 to anybody who could solve one of seven open problems in mathematics.[12] Six remain unsolved.[13] In the years since, well-intentioned amateurs have made a whole host of attempts on these problems, especially the Riemann Hypothesis.[14] The trouble is that the problems themselves are incredibly difficult even to understand without university-level mathematical education.
Riemann Hypothesis
Often described as the most important open problem in mathematics, the Riemann Hypothesis concerns the behaviour of the Riemann zeta function: it asserts that every nontrivial zero of the function has real part 1/2. It has been proven that if the Riemann Hypothesis is true, then certain statements about the distribution of prime numbers are also true, which is why it matters so much to mathematicians. Because the problem (or at least its consequences for our understanding of the primes) can be stated fairly simply, dozens of 'proofs' of the hypothesis are posted to the Internet regularly, normally without engaging with any of the depth of the actual problem. A British academic has collected many such attempted proofs.[14]
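As a taste of what the hypothesis is about, the sketch below (which assumes the third-party mpmath library is installed) evaluates the zeta function at the approximate location of its first nontrivial zero on the critical line:

```python
# Evaluate the Riemann zeta function near its first nontrivial zero,
# which lies on the critical line Re(s) = 1/2 at about 1/2 + 14.1347i.
# Requires the third-party mpmath library (pip install mpmath).
from mpmath import mp, mpc, zeta

mp.dps = 25                               # 25 decimal digits of precision
s = mpc(0.5, 14.134725141734693)          # approximate first zero
print(abs(zeta(s)))                       # tiny (limited by the quoted digits)

# Off the critical line at the same height, zeta is noticeably nonzero:
print(abs(zeta(mpc(0.75, 14.134725141734693))))
```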
P vs. NP problem
The P vs. NP problem deals with the minimum complexity (i.e. relation between input size and running time) of optimal algorithms solving certain kinds of computational problems.
In computer science, the complexity of an algorithm is described by its "order" of growth, written with a capital O. Say, for example, your problem is sorting a list of n items, and you write an algorithm that takes any list of n items and sorts them. If the computation time is directly proportional to the number of items (i.e. doubling the number of items doubles the computation time), we say the algorithm is O(n). If the computation time is proportional to the number of items squared (i.e. doubling the number of items quadruples the computation time), we say the algorithm is O(n²). For sorting an arbitrary list, the fastest comparison-based algorithms are O(n log n); for searching an ordered list to see whether a specific item is there or not, the fastest algorithm (binary search) is O(log n), as sketched below.
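As a toy illustration, counting comparisons rather than wall-clock time, here is an O(n) linear scan next to an O(log n) binary search over a sorted list (the function names and test data are made up for the example):

```python
# Counting comparisons for two ways of searching a sorted list:
# a linear scan is O(n), binary search is O(log n).
def linear_search(items, target):
    comparisons = 0
    for value in items:
        comparisons += 1
        if value == target:
            break
    return comparisons

def binary_search(items, target):
    comparisons, lo, hi = 0, 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if items[mid] == target:
            break
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return comparisons

data = list(range(1_000_000))
print(linear_search(data, 999_999))   # 1,000,000 comparisons
print(binary_search(data, 999_999))   # about 20 comparisons
```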
"P" stands for Polynomial. A computational problem is considered "in P" if an algorithm exists that can solve the problem in "polynomial time" — that is, it's O(n), or O(n2), or O(n3), or any order where the n is raised to some fixed power. If, however, the fastest algorithm is something like O(2n), where the n appears as an exponent, then the problem isn't being solved in polynomial time and isn't "in P".
"NP" stands for Non-deterministically Polynomial. A computational problem is "in NP" if an algorithm exists that can solve the problem in polynomial time on an unlimited number of computers running in parallel. A very special subset of NP problems are those considered "NP-complete." These are problems which can all be transformed into one another; the fastest algorithm for solving any one of them can be used to solve them all. Examples of NP-complete problems include the decision version of the Traveling Salesman Problem ("is there a route shorter than n length along this interconnected network of nodes in which the salesman visits each node exactly once?"), and the Binomial Expansion Problem ("for an equation of n variables, is there an exponent for each one such that we arrive at a given set of results?").
P vs. NP is the question of whether these two sets of problems are in fact equal, that is, whether polynomial-time algorithms exist for every problem in NP. The question is hugely important to theoretical computer science and has attained a mythical status as the biggest open problem in the field, so it attracts a lot of outside attention and a correspondingly large number of cranks attempting to settle it one way or the other. Sadly, the problem is very unlikely to be resolved without an advanced level of understanding; Scott Aaronson's article gives a good overview. The direct practical impact of a proof that P = NP would most likely be limited, for the simple reason that "polynomial time" is not the same as "fast" for practical purposes (e.g. n^10 is greater than 2^n for every n from 2 up to 58, by which point either algorithm already needs on the order of 5×10^17 operations and is intractable regardless). Notable computer scientist Donald Knuth holds the opinion that most likely P is equal to NP, but that the proof will be non-constructive and have no direct impact on the state of algorithmic research.[15]
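The crossover figure quoted above is easy to check directly with a throwaway sketch:

```python
# Check the crossover: n**10 beats 2**n only up to n = 58 (for n >= 2).
last = max(n for n in range(2, 200) if n ** 10 > 2 ** n)
print(last)                      # 58
print(59 ** 10, 2 ** 59)         # ~5.1e17 vs ~5.8e17: 2**n has won
```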
See also
- Conservapedian mathematics
- Science and Math Defeated
- Mohamed El Naschie
- Vortex-based math
- Numerology
- Fun:Pi
References
- Geometry forums — Squaring the Circle
- Steve Dutch — Why Trisecting the Angle is Impossible
- Jω-metoden (The jω method), Swedish Wikipedia: https://sv.wikipedia.org/wiki/J%CF%89-metoden
- The Indiana Pi Bill, 1897
- This appears to stem from a basic rounding error: to one decimal place π rounds to 3.1, not 3.2, and even 3.1 would be a grotesquely imprecise approximation to π. They couldn't even get the rounding right.
- Any system with enough axioms to prove useful things is covered, really.
- John W. Dawson — The Reception of Gödel’s Incompleteness Theorems
- Godel Unknotted by TheLastWordSword (Revision as of 20:09, April 6, 2015) Phillip A. Batz Wiki.
- Science Forums — Cantor's Diagonal Slash Disproved
- Underwood Dudley — Mathematical cranks
- 0.999... § Infinite series and sequences
- Clay Mathematics Institute — First Clay Mathematics Institute Millennium Prize Announced
- The Poincaré Conjecture was proven by Grigoriy Perelman in 2002/3. He declined the prize.
- Proposed (dis)proofs of the Riemann Hypothesis
- Donald Knuth — Twenty Questions for Donald Knuth, question 17