Prove DeMorgan's laws


Using the ten inferences of the Natural Deduction System, prove DeMorgan's laws. (A small, purely illustrative encoding of a few of these rules is sketched just after the list of rules below.)

The Rules of Natural Deduction

  • Negation Introduction: {(P → Q), (P → ¬Q)} ⊢ ¬P

  • Negation Elimination: {(¬P → Q), (¬P → ¬Q)} ⊢ P

  • And Introduction: {P, Q} ⊢ P ʌ Q

  • And Elimination: P ʌ Q ⊢ {P, Q}

  • Or Introduction: P ⊢ {(P ∨ Q),(Q ∨ P)}

  • Or Elimination: {(P ∨ Q), (P → R), (Q → R)} ⊢ R

  • Iff Introduction: {(P → Q), (Q → P)} ⊢ (P ≡ Q)

  • Iff Elimination: (P ≡ Q) ⊢ {(P → Q), (Q → P)}

  • If Introduction: (P ⊢ Q) ⊢ (P → Q)

  • If Elimination: {(P → Q), P} ⊢ Q
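
For readers who want to see the rules mechanically, here is a small, purely illustrative Python sketch (my own encoding, not part of the challenge or its scoring) of how a handful of these rules act on formula trees; formulas are nested tuples and each function simply checks that its premises have the required shape.

# Illustrative only: a minimal, hypothetical encoding of a few of the ten
# rules as functions over formula trees.  Nothing here affects scoring.
#
# Formulas are nested tuples: ("var", "P"), ("not", f), ("and", f, g),
# ("or", f, g), ("imp", f, g), ("iff", f, g).

def and_intro(p, q):
    """{P, Q} |- P ^ Q"""
    return ("and", p, q)

def and_elim(conj):
    """P ^ Q |- {P, Q}"""
    assert conj[0] == "and"
    return conj[1], conj[2]

def if_elim(imp, p):
    """{(P -> Q), P} |- Q"""
    assert imp[0] == "imp" and imp[1] == p
    return imp[2]

def neg_intro(imp1, imp2):
    """{(P -> Q), (P -> ~Q)} |- ~P"""
    assert imp1[0] == imp2[0] == "imp"
    assert imp1[1] == imp2[1]               # same antecedent P
    assert imp2[2] == ("not", imp1[2])      # consequents are Q and ~Q
    return ("not", imp1[1])

# Example 2 from below: from (P -> R) and P, If Elimination yields R.
P, R = ("var", "P"), ("var", "R")
print(if_elim(("imp", P, R), P))   # ('var', 'R')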

Proof structure

Each statement in your proof must be the result of one of the ten rules applied to some previously derived propositions (no circular logic) or an assumption (described below). Each rule operates on the propositions on the left-hand side of the ⊢ (logical consequence operator) and may produce any of the propositions on the right-hand side. If Introduction works slightly differently from the rest of the rules (described in detail below): it operates on one statement that is the logical consequent of another.

Example 1

You have the following statements:

{(P → R), Q}

You may use And Introduction to make:

(P → R) ʌ Q

Example 2

You have the following statements:

{(P → R), P}

You may use If Elimination to make:

R

Example 3

You have the following statements:

(P ʌ Q)

You may use And Elimination to make:

P

or to make:

Q

Assumption Propagation

You may at any point assume any statement you wish. Any statement derived from these assumptions will be "reliant" on them. Statements are also reliant on every assumption their parent statements rely on. The only way to eliminate assumptions is If Introduction: you start with a statement Q that is reliant on an assumption P and end with (P → Q). The new statement is reliant on every assumption Q relies on except for the assumption P. Your final statement should rely on no assumptions.
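
As a tiny illustration of this bookkeeping (the Python encoding below is my own and is not mandated by anything in the challenge), here is the 3-point derivation of P → (P ∨ Q): assume P, apply Or Introduction, then discharge P with If Introduction.

# A hypothetical bookkeeping sketch: every derived statement carries the union
# of its parents' assumption sets, and If Introduction removes exactly one.

def derive(parents):
    """A new statement inherits every assumption its parents rely on."""
    deps = set()
    for _statement, parent_deps in parents:
        deps |= parent_deps
    return deps

p = ("P", {"P"})                                   # assume P        (1 point)
p_or_q = ("P v Q", derive([p]))                    # Or Introduction (1 point)
theorem = ("P -> (P v Q)",
           derive([p_or_q]) - {"P"})               # If Introduction (1 point)

print(theorem)   # ('P -> (P v Q)', set()) : relies on no assumptions

The final statement carries an empty assumption set, so it is a theorem; its cost is 1 assumption plus 2 rule applications.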

Specifics and scoring

You will construct one proof for each of DeMorgan's two laws using only the 10 inferences of the Natural Deduction Calculus.

The two rules are:

¬(P ∨ Q) ≡ ¬P ʌ ¬Q

¬(P ʌ Q) ≡ ¬P ∨ ¬Q
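
As a quick sanity check only (this is not a proof in the system above and earns no points), both equivalences can be confirmed semantically by brute force over the four truth assignments, for example in Python:

# Brute-force semantic check of the two equivalences; not part of any proof.
from itertools import product

for P, Q in product([False, True], repeat=2):
    assert (not (P or Q)) == ((not P) and (not Q))   # ~(P v Q) == ~P ^ ~Q
    assert (not (P and Q)) == ((not P) or (not Q))   # ~(P ^ Q) == ~P v ~Q

print("Both equivalences hold for every assignment.")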

Your score is the number of inferences used plus the number of assumptions made. Your final statement should not rely on any assumptions (i.e. should be a theorem).

You are free to format your proof as you see fit.

You may carry over any Lemmas from one proof to another at no cost to score.

Example Proof

I will prove that (P and not(P)) implies Q

(Each bullet point is +1 point)

  • Assume not (Q)

  • Assume (P and not(P))

  • Using And Elim on (P and not(P)) derive {P, not(P)}

  • Use And Introduction on P and not(Q) to derive (P and not(Q))

  • Use And Elim on the statement just derived to make P

The new P proposition is different from the one we derived earlier: namely, it is reliant on the assumptions not(Q) and (P and not(P)), whereas the original statement was reliant only on (P and not(P)). This allows us to do:

  • If Introduction on P introducing not(Q) implies P (still reliant on the (P and not(P)) assumption)

  • Use And Introduction on not(P) and not(Q) (from step 3) to derive (not(P) and not(Q))

  • Use And Elim on the statement just derived to make not(P) (now reliant on not(Q))

  • If Introduction on the new not(P) introducing not(Q) implies not(P)

  • We will now use negation elimination on not(Q) implies not(P) and not(Q) implies P to derive Q

This Q is reliant only on the assumption (P and not(P)) so we can finish the proof with

  • If Introduction on Q to derive (P and not(P)) implies Q

This proof scores a total of 11.
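
To make the reliance bookkeeping easier to audit, the same 11 steps can be transcribed into a small table, here in Python with my own ad-hoc notation (^ for "and", ~ for "not"); the assert confirms that the final statement relies on no assumptions.

# Each entry is (statement, set of assumptions it relies on); assumptions and
# rule applications each cost one point, so the score is just the line count.

steps = [
    ("~Q",             {"~Q"}),            # 1: assumption
    ("P ^ ~P",         {"P ^ ~P"}),        # 2: assumption
    ("P, ~P",          {"P ^ ~P"}),        # 3: And Elim on 2
    ("P ^ ~Q",         {"P ^ ~P", "~Q"}),  # 4: And Intro on 3 and 1
    ("P",              {"P ^ ~P", "~Q"}),  # 5: And Elim on 4
    ("~Q -> P",        {"P ^ ~P"}),        # 6: If Intro, discharges ~Q
    ("~P ^ ~Q",        {"P ^ ~P", "~Q"}),  # 7: And Intro on 3 and 1
    ("~P",             {"P ^ ~P", "~Q"}),  # 8: And Elim on 7
    ("~Q -> ~P",       {"P ^ ~P"}),        # 9: If Intro, discharges ~Q
    ("Q",              {"P ^ ~P"}),        # 10: Negation Elim on 6 and 9
    ("(P ^ ~P) -> Q",  set()),             # 11: If Intro, discharges P ^ ~P
]

assert steps[-1][1] == set()   # the final statement is a theorem
print("score:", len(steps))    # 11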

Post Rock Garf Hunter



Although the consensus on meta is clear, not everyone will have seen it yet, so this is just to highlight that proof golfing is on topic.

– trichoplax – 2016-10-16T08:33:01.313

I think you should explain the structure of proofs and ⊢ (the symbol also doesn't render for me on mobile). – xnor – 2016-10-16T08:39:02.603

I think the explanations definitely help. What I'd find most useful would be a worked and scored example of some simple proof that includes If-Introduction and assumptions, preferably nested. Maybe of contrapositive? – xnor – 2016-10-16T09:02:35.900

In your example, I believe that the first two assumptions would have to be flipped; nowhere does it state that (P ⊢ (Q ⊢ R)) ⊢ (Q ⊢ (P ⊢ R)) (in this instance, ¬Q ⊢ ((P ʌ ¬P) ⊢ P) to (P ʌ ¬P) ⊢ (¬Q ⊢ P) was used). – LegionMammal978 – 2016-10-16T12:40:23.277

@LegionMammal978 the order of assumptions does not matter. The ⊢ operator is transitive. – Post Rock Garf Hunter – 2016-10-16T16:29:00.397

@Dennis will just make a Jelly builtin and have an infinite score haha. – Buffer Over Read – 2016-10-17T00:58:54.390

@TheBitByte Dennis should definitely make a Jelly builtin, but given the rules of the site he will have no score – edc65 – 2016-10-17T07:03:46.777

@WheatWizard You should still state that (P ⊢ (Q ⊢ R)) ⊢ (Q ⊢ (P ⊢ R)); additionally, I can't see how you got from step 6 to step 7. – LegionMammal978 – 2016-10-17T11:33:52.343

@LegionMammal978 I see your confusion. Step 7 does not follow from step 6 but rather follows from step 3. The proof is not quite linear. I will try to make this clearer – Post Rock Garf Hunter – 2016-10-17T15:21:43.500

I don't understand this reliance business. For example, @feersum's proof contains assume ~P { Q ^ Q; Q; } ~P -> Q but isn't Q supposed to rely on P for this to work? – Neil – 2016-10-19T09:12:05.970

@Neil I am not sure I understand feersum's proof myself. I am working on verifying it right now but I am having particular trouble with the portion you mention. However it is perfectly fine for something to rest on two contradictory assumptions. – Post Rock Garf Hunter – 2016-10-19T11:48:07.217

The first 2 are equivalent... because if I have one axiom formula f(a,b,c), where a, b, c are proposition variables, then f(not a, b, c), f(a, not b, c), ..., f(not a, not b, not c) should be true too... – RosLuP – 2016-10-19T15:20:42.967

@RosLuP While your assertion is in fact true, it is actually a consequence of the second axiom. While these two axioms may seem trivially redundant, since we do not have ~(~P) -> P and cannot prove it without axiom 2, the two axioms are in fact both required. – Post Rock Garf Hunter – 2016-10-19T15:24:57.483

Are you allowed to prove multiple things in a single "assumption context", and then extract multiple implication statements, to save on how many "assumption lines" are needed? e.g. (assume (P/\~P); P,~P by and-elim; (assume ~Q; P by assumption; ~P by assumption); ~Q->P by impl-intro; ~Q->~P by impl-intro; Q by neg-elim); P/\~P->Q by impl-intro to get a score of 9? – Daniel Schepler – 2017-10-02T22:37:09.213

Also, I don't see an "assumption" rule, so I'm not sure how you actually prove P -> P. Would a proof like 1. assume P; 2. P -> P by impl-intro, 1, 1 be acceptable? Or, to prove P -> P /\ P: would 1. { assume P; 2. P /\ P by and-intro, 1, 1 }; 3. P -> P /\ P by impl-intro, 1, 2 be acceptable? – Daniel Schepler – 2017-10-03T00:00:51.840

Answers


Score: 81

Each line should be worth 1 point. De Morgan's laws are found at statements (3) and (6). Labels in brackets denote the previous statement(s) a line depends on if they are not the immediately preceding ones.

(a) assume P {
    (aa) P ^ P
    (ab) P
    (ac) P v Q
} (a1) P -> P
  (a2) P -> P v Q
(1) assume ~P ^ ~Q {
    (1a) assume P v Q {
        (1aa) assume Q {
            (1aaa) assume ~P {
                (1aaaa) Q ^ Q [1aa]
                (1aaab) Q
                (1aaac) ~Q [1]
            } (1aaa1) ~P -> Q
              (1aaa2) ~P -> ~Q
            (1aab) P
        } (1aa1) Q -> P
        P [1a, a1, 1aa1]
        ~P [1]
    } (1a1) P v Q -> P
      (1a2) P v Q -> ~P
    (1b) ~(P v Q)
} (11) ~P ^ ~Q -> ~(P v Q)
(2) assume ~(P v Q) {
    (2a) ~(P v Q) ^ ~(P v Q)
    (2b) assume P {
        (2aa) ~(P v Q) [2a]
    } (2b1) P -> ~(P v Q)
    (2c) ~P [a2, 2b1]
    (2d) assume Q {
        (2da) ~(P v Q) [2a]
        (2db) P v Q
    } (2d1) Q -> ~(P v Q)
      (2d2) Q -> P v Q
    (2e) ~Q
    (2f) ~P ^ ~Q
} (21) ~(P v Q) -> ~P ^ ~Q
(3) ~(P v Q) == ~P ^ ~Q [11, 21]
(4) assume ~P v ~Q {
    (4a) assume ~P {
        (4aa) assume P ^ Q {
            (4aaa) P
            (4aab) ~P ^ ~P [4a]
            (4aac) ~P
        } (4aa1) P ^ Q -> P
          (4aa2) P ^ Q -> ~P
        (4ab) ~(P ^ Q)
    } (4a1) ~P -> ~(P ^ Q)
    (4b) assume ~Q {
        (4ba) assume P ^ Q {
            (4baa) Q
            (4bab) ~Q ^ ~Q [4b]
            (4bac) ~Q
        } (4ba1) P ^ Q -> Q
          (4ba2) P ^ Q -> ~Q
        (4bb) ~(P ^ Q)
    } (4b1) ~Q -> ~(P ^ Q)
    (4c) ~(P ^ Q) [4, 4a1, 4b1]
} (41) ~P v ~Q -> ~(P ^ Q) 
(5) assume ~(P ^ Q) {
    (5a) assume ~(~P v ~Q) {
        (5aa) ~(~P v ~Q) ^ ~(P ^ Q) [5, 5a]
        (5ab) assume ~P {
            (5aba) ~P v ~Q
            (5abb) ~(~P v ~Q) [5aa]
        } (5ab1) ~P -> ~P v ~Q
          (5ab2) ~P -> ~(~P v ~Q)
        (5ac) P
        (5ad) assume ~Q {
            (5ada) ~P v ~Q
            (5adb) ~(~P v ~Q) [5aa]
        } (5ad1) ~Q -> ~P v ~Q
          (5ad2) ~Q -> ~(~P v ~Q)
        (5ae) Q
        (5af) P ^ Q [5ac, 5ae]
        (5ag) ~(P ^ Q) [5aa]
    } (5a1) ~(~P v ~Q) -> P ^ Q
      (5a2) ~(~P v ~Q) -> ~(P ^ Q)
    (5b) ~P v ~Q
} (51) ~(P ^ Q) -> ~P v ~Q
(6) ~(P ^ Q) == ~P v ~Q [41, 51]

feersum



Score: 59

Explanation

I'll use a tree-like structure for the proof, as I find this style quite readable. Each line is annotated with the count of used rules; for example, "Example 1" from the challenge would be represented as this tree (growing bottom to top):

[proof tree: AIntro]

Note the unknown counts A, B and also the assumption Γ - so this is not a theorem. To demonstrate how to prove a theorem, let us assume A and use an Or-Introduction as follows:

[proof tree: OIntro]

This still depends on the assumption A, but we can derive a theorem by applying an If-Introduction:

[proof tree: IIntro]

So we were able to derive the theorem ⊢ A → (A ∨ B) in a total of 3 steps (1 assumption & 2 applied rules).

With this we can go on to prove a few new rules that help us to prove DeMorgan's Laws.

Additional Rules

Let's first derive the Principle of Explosion and denote it with PE in further proofs:

[proof tree: PE]

From this we derive another form of it: A ⊢ ¬A → X - we'll call it CPE:

[proof tree: CPE]

We'll need another one where the negated term (¬) is an assumption and refer to it as CPE-:

[proof tree: CPE⁻]

From the two rules we just derived (CPE and CPE⁻), we can derive an important rule, Double Negation:

[proof tree: DN]

The next thing to do is to prove something like Modus Tollens - hence MT:

[proof tree: MT]
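
(A quick brute-force check, in Python with → read as material implication, confirms that each of these derived rules is at least semantically valid; this is only a sanity check and says nothing about the proof-golf cost.)

# Semantic sanity check of PE, CPE, CPE-, DN and MT over all assignments.
from itertools import product

def imp(a, b):
    """Material implication, standing in for the -> connective."""
    return (not a) or b

for A, B, X in product([False, True], repeat=3):
    assert imp(A and not A, X)                  # PE:   |- (A ^ ~A) -> X
    assert imp(A, imp(not A, X))                # CPE:  A |- ~A -> X
    assert imp(not A, imp(A, X))                # CPE-: ~A |- A -> X
    assert (not (not A)) == A                   # DN:   ~~A == A
    assert imp(imp(A, not B), imp(B, not A))    # MT:   (A -> ~B) |- B -> ~A

print("PE, CPE, CPE-, DN and MT are all semantically valid.")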

Lemmas

Lemma A

Lemma A1

We'll need the following rule twice:

[proof tree: LA1]

Lemma A

By instantiating the lemma we just proved twice, we can show one direction of an equivalence; we will need it in the final proof:

[proof tree: LA]

Lemma B

In order to show the other direction, we'll need to do some quite repetitive work twice (once for each of the arguments A and B in (A ∨ B)) - this means I could possibly golf the proof further.

Lemma B1'

[proof tree: LB1']

Lemma B1

[proof tree: LB1]

Lemma B2'

[proof tree: LB2']

Lemma B2

[proof tree: LB2]

Lemma B

Finally the conclusion of B1 and B2:

[proof tree: LB]

Actual proof

Once we proved these two statements:

  • Lemma A: ⊢ (A ∨ B) → ¬(¬A ʌ ¬B)
  • Lemma B: ⊢ ¬(A ∨ B) → (¬A ʌ ¬B)

We can prove the first equivalence ¬(A ∨ B) ≡ ¬A ʌ ¬B as follows:

[proof tree: P1]

And with the rule we just proved, together with Iff-Elimination, we can prove the second equivalence too:

[proof tree: P2]

I'm not sure about the score - let me know if I got something wrong.


Source

If somebody wants the tex source (needs mathpartir):

In the following a rule \textbf{XYZ'} will denote the rule XYZ's second last
step, for example \textbf{PE'} will be $ A \land \lnot A \vdash X $.

\section*{Principle of Explosion [PE]}

\begin{mathpar}
  \inferrule*[Left=$\to$-Intro,Right=10]
    {\inferrule*[Left=$\lnot$-Elim,Right=9]
      {\inferrule*[Left=$\to$-Intro,Right=4]
        {\inferrule*[Left=$\land$-Elim,Right=3]
          {\inferrule*[Left=Axiom,Right=2]
            { }
            { A \land \lnot A, \lnot X \vdash A \land \lnot A }
          }
          { A \land \lnot A, \lnot X \vdash A }
        }
        { A \land \lnot A \vdash \lnot X \to A } \\
       \inferrule*[Right=$\to$-Intro,Left=4]
          {\inferrule*[Right=$\land$-Elim,Left=3]
            {\inferrule*[Right=Axiom,Left=2]
              { }
              { A \land \lnot A, \lnot X \vdash A \land \lnot A }
            }
            { A \land \lnot A, \lnot X \vdash \lnot A }
          }
        { A \land \lnot A \vdash \lnot X \to \lnot A }
      }
      { A \land \lnot A \vdash X }
    }
    { \vdash (A \land \lnot A) \to X }
\end{mathpar}

\section*{Conditioned PE [CPE]}

\begin{mathpar}
  \inferrule*[Left=$\to$-Intro,Right=5]
  {\inferrule*[Left=$\to$-Elim,Right=4]
    {\inferrule*[Left=$\land$-Intro,Right=3]
      {\inferrule*[Left=Axiom,Right=1]
        { } { A \vdash A } \\
       \inferrule*[Right=Axiom,Left=1]
        { } { \lnot A \vdash \lnot A }
      }
      { A, \lnot A \vdash A \land \lnot A } \\
     \inferrule*[Right=PE,Left=0]
      { } { \vdash (A \land \lnot A) \to X }
    }
    { A, \lnot A \vdash X }
  }
  { A \vdash \lnot A \to X }
\end{mathpar}

to get \textbf{CPE$^-$}:

\begin{mathpar}
  \inferrule*[Left=$\to$-Intro,Right=1]
    {\inferrule*[Left=CPE',Right=0]
      { }
      { A, \lnot A \vdash X }
    }
    { \lnot A \vdash A \to X }
\end{mathpar}

\section*{Double Negation [DN]}

\begin{mathpar}
  \inferrule*[Left=$\equiv$-Intro,Right=5]
    {\inferrule*[Left=$\to$-Intro,Right=2]
      {\inferrule*[Left=$\lnot$-Elim,Right=1]
        {\inferrule*[Left=CPE$^-$,Right=0]
          { }
          { \lnot\lnot A \vdash \lnot A \to X } \\
          \inferrule*[Right=CPE$^-$,Left=0]
          { }
          { \lnot\lnot A \vdash \lnot A \to \lnot X }
        }
        { \lnot\lnot A \vdash A }
      }
      { \vdash \lnot\lnot A \to A } \\ \\ \\
      \inferrule*[Left=$\to$-Intro,Right=2]
        {\inferrule*[Left=$\lnot$-Intro,Right=1]
          {\inferrule*[Left=CPE,Right=0]
            { }
            {  A \vdash \lnot A \to X } \\
            \inferrule*[Right=CPE,Left=0]
            { }
            { A \vdash \lnot A \to \lnot X }
          }
          { A \vdash \lnot\lnot A }
        }
        { \vdash A \to \lnot\lnot A }
    }
    { \vdash \lnot\lnot A \equiv A  }
\end{mathpar}

\section*{Modus Tollens [MT]}

\begin{mathpar}
  \inferrule*[Left=$\to$-Intro,Right=6]
    {\inferrule*[Left=$\lnot$-Intro,Right=5]
      {\inferrule*[Left=Axiom,Right=1]
       { }
       { A \to \lnot B \vdash A \to \lnot B } \\
       \inferrule*[Right=$\to$-Intro,Left=3]
         {\inferrule*[Right=Axiom,Left=2]
           { }
           { A, B \vdash B }
         }
         { B \vdash A \to B }
       }
      { A \to \lnot B, B \vdash \lnot A }
    }
    { A \to \lnot B \vdash B \to \lnot A }
\end{mathpar}

\section*{Lemma A}

\textbf{Lemma A1}:

\begin{mathpar}
  \inferrule*[Left=$\to$-Intro,Right=9]
    {\inferrule*[Left=$\lor$-Elim,Right=8]
       { \inferrule*[Left=CPE,Right=3]
          {\inferrule*[Left=$\land$-Elim,Right=2]
            {\inferrule*[Left=Axiom,Right=1]
              { }
              { A \land B \vdash A \land B }
            }
            { A \land B \vdash B}
          }
          { A \land B \vdash \lnot B \to X } \\
         \inferrule*[Right=CPE,Left=3]
          {\inferrule*[Right=$\land$-Elim,Left=2]
            {\inferrule*[Right=Axiom,Left=1]
              { }
              { A \land B \vdash A \land B }
            }
            { A \land B \vdash A }
          }
          { A \land B \vdash \lnot A \to X } \\ \\ \\
         \inferrule*[Right=Axiom,Left=1]
          { }
          { \lnot A \lor \lnot B \vdash \lnot A \lor \lnot B }
       }
       { A \land B, \lnot A \lor \lnot B \vdash X }
    }
    { \lnot A \lor \lnot B \vdash (A \land B) \to X }
\end{mathpar}

\textbf{Lemma A}:

ბიმო


As far as I can tell, the natural deduction proof system here does not allow for proving a statement once with generic proposition variables, and then instantiating it. So, each time you have a different instantiation of one of your lemmas in terms of the variables P and Q, you would have to count its steps separately in the final total. (In other words, the proof system doesn't directly allow proving "second-order" lemmas like "for all propositions A and B, A/\B -> B/\A" and then using it to prove both P/\(Q/\R) -> (Q/\R)/\P and (P/\Q)/\R -> R/\(P/\Q).) – Daniel Schepler – 2017-10-03T00:07:00.500

@DanielSchepler: Yes, but there are no circular dependencies, and the rules state "You may carry over any Lemmas from one proof to another at no cost to score", so that'll be fine. Edit: In fact, if that weren't allowed, I'm certain this question would only have one eligible answer. – ბიმო – 2017-10-03T00:57:08.503

I was interpreting that as meaning simply that you could have some common proofs of a set of concrete formulas shared between the proofs of the two final statements. – Daniel Schepler – 2017-10-03T01:13:15.007


Score: 65

The de Morgan laws are line 30 and line 65.

(I haven't made any particular effort to golf this, for example to see if there are some repeated proofs that could be abstracted out at the beginning.)

 1. assume ~(P\/Q)
 2.   assume P
 3.     P\/Q  by or-introl, 2
 4.   P -> P\/Q  by impl-intro, 2, 3
 5.   P -> ~(P\/Q)  by impl-intro, 2, 1
 6.   ~P  by neg-intro, 4, 5
 7.   assume Q
 8.     P\/Q  by or-intror, 7
 9.   Q -> P\/Q  by impl-intro, 7, 8
10.   Q -> ~(P\/Q) by impl-intro, 7, 1
11.   ~Q  by neg-intro, 9, 10
12.   ~P /\ ~Q  by and-intro, 6, 11
13. ~(P\/Q) -> ~P/\~Q  by impl-intro, 1, 12
14. assume ~P /\ ~Q
15.   ~P, ~Q  by and-elim, 14
16.   assume P \/ Q
17.     assume P
18.     P -> P  by impl-intro, 17, 17
19.     assume Q
20.       assume ~P
21.       ~P -> Q  by impl-intro, 20, 19
22.       ~P -> ~Q  by impl-intro, 20, 15
23.       P  by neg-elim, 21, 22
24.     Q -> P  by impl-intro, 19, 23
25.     P  by or-elim, 16, 18, 24
26.   P\/Q -> P  by impl-intro, 16, 25
27.   P\/Q -> ~P  by impl-intro, 16, 15
28.   ~(P\/Q)  by neg-intro, 26, 27
29. ~P/\~Q -> ~(P\/Q)  by impl-intro, 14, 28
30. ~(P\/Q) <-> ~P/\~Q  by iff-intro, 13, 29
31. assume ~(P/\Q)
32.   assume ~(~P\/~Q)
33.     assume ~P
34.       ~P\/~Q  by or-introl, 33
35.     ~P -> ~P\/~Q  by impl-intro, 33, 34
36.     ~P -> ~(~P\/~Q)  by impl-intro, 33, 32
37.     P  by neg-elim, 35, 36
38.     assume ~Q
39.       ~P\/~Q  by or-intror, 38
40.     ~Q -> ~P\/~Q  by impl-intro, 38, 39
41.     ~Q -> ~(~P\/~Q)  by impl-intro, 38, 32
42.     Q  by neg-elim, 40, 41
43.     P /\ Q  by and-intro, 37, 42
44.   ~(~P\/~Q) -> P /\ Q  by impl-intro, 32, 43
45.   ~(~P\/~Q) -> ~(P /\ Q)  by impl-intro, 32, 31
46.   ~P \/ ~Q  by neg-elim, 44, 45
47. ~(P/\Q) -> ~P\/~Q  by impl-intro, 31, 46
48. assume ~P\/~Q
49.   assume ~P
50.     assume P/\Q
51.       P, Q  by and-elim, 50
52.     P/\Q -> P  by impl-intro, 50, 51
53.     P/\Q -> ~P  by impl-intro, 50, 49
54.     ~(P/\Q)  by neg-intro, 52, 53
55.   ~P -> ~(P/\Q)  by impl-intro, 49, 54
56.   assume ~Q
57.     assume P/\Q
58.       P, Q  by and-elim, 57
59.     P/\Q -> Q  by impl-intro, 57, 58
60.     P/\Q -> ~Q  by impl-intro, 57, 56
61.     ~(P/\Q)  by neg-intro, 59, 60
62.   ~Q -> ~(P/\Q)  by impl-intro, 56, 61
63.   ~(P/\Q)  by or-elim, 48, 55, 62
64. ~P\/~Q -> ~(P/\Q)  by impl-intro, 48, 63
65. ~(P/\Q) <-> ~P\/~Q  by iff-intro, 47, 64

Daniel Schepler
