
# [Solved]: Why do they say A, B, C, ... implies M instead of A∧B∧ C∧ ... implies M?

Problem Detail:

The conclusion follows from all the premises. It seems to me that this means the conclusion follows from the conjunction of the premises. Right? So why does logic use a comma instead?

It's a distinction between language and metalanguage.

$A, B \vdash M$ says: if I have both $A$ and $B$ amongst my hypotheses, then I can deduce $M$.
$A \wedge B \vdash M$ says: if I have $A \wedge B$ amongst my hypotheses, then I can deduce $M$.
The fact that the two are equivalent comes from the definition of the operator $\wedge$.
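This equivalence is just currying and uncurrying. A minimal sketch in Lean 4 (the proposition names `A`, `B`, `M` match the sequents above; the encoding of "hypotheses" as function arguments is my own illustration, not part of the original answer):

```lean
-- "A, B ⊢ M" read as: from hypotheses A and B separately, deduce M.
example (A B M : Prop) (h : A → B → M) : A ∧ B → M :=
  fun hab => h hab.1 hab.2

-- "A ∧ B ⊢ M" read as: from the single hypothesis A ∧ B, deduce M.
example (A B M : Prop) (h : A ∧ B → M) : A → B → M :=
  fun a b => h ⟨a, b⟩
```

The two directions show that the comma-separated hypotheses and the single conjunctive hypothesis are interderivable, even though they are different things syntactically.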

I could define a new operator $?$ with these deduction rules (using the sequent calculus formulation): $$\frac{A \vdash M}{A ? B \vdash M} \qquad \frac{B \vdash M}{A ? B \vdash M} \qquad \frac{\vdash A, M \quad \vdash B,M}{\vdash A?B,M}$$

This new operator has the same rules as $\wedge$ (in the sequent calculus), so I could substitute $?$ and $\wedge$ in a true judgement and still get a true judgement, and vice versa. Nonetheless, they are syntactically speaking two different propositions.
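To make the "syntactically different, deductively equivalent" point concrete, here is a hedged sketch in Lean 4: `Qmark` is a hypothetical stand-in for the `?` operator, defined as a fresh structure so that it is a genuinely different proposition from `And`, yet provably equivalent to it:

```lean
-- A new operator with the same intro/elim behavior as ∧,
-- but a distinct syntactic identity.
structure Qmark (A B : Prop) : Prop where
  left  : A
  right : B

-- Qmark and ∧ are interderivable, so either can replace the
-- other in a true judgement.
example (A B : Prop) : Qmark A B ↔ A ∧ B :=
  ⟨fun h => ⟨h.left, h.right⟩, fun h => ⟨h.1, h.2⟩⟩
```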

It takes a while to internalize the difference between the language and the metalanguage, especially in the beginning when you only study a logic which models our everyday reasoning. There are logics such as intuitionistic logic and linear logic which diverge further, and while these logics (especially linear logic) are more advanced, they are in some ways easier to handle because there is a clearer separation between the logic and the meta-logic.

I'll briefly discuss "and" in linear logic to illustrate this point. In a nutshell: in everyday logic, once you have a hypothesis, you can use it at multiple points in a proof. In linear logic, a hypothesis can only be used once, unless it's marked as reusable. Linear logic can thus model not just proofs in the usual sense, but also more physical processes, such as "if I have a cake before eating it, then I have no cake after eating it". Linear logic has two conjunction operators. Both say: if I have a process that produces an $A$, and I have a process that produces a $B$, then I have a process that produces an $A$ "and" a $B$. They differ in how the processes are combined:

• Multiplicative conjunction puts the two processes next to each other: given $\vdash \Gamma, A$ and $\vdash \Delta, B$ (a process that produces an $A$ and some more stuff, and a process that produces a $B$ and some more stuff), I can combine them to make $A \otimes B$ and the combined output of the two processes: $\vdash \Gamma, \Delta, A \otimes B$. For example, if I make a plate of spaghetti and an apple pie, I can eat a plate of spaghetti and an apple pie.
• Additive conjunction runs the two processes together: given $\vdash \Gamma, A$ and $\vdash \Gamma, B$ (I can run the process and produce $\Gamma$ plus either $A$ or $B$), I can run that same process and produce both: $\vdash \Gamma, A \& B$. For example, if I can spend \$10 to get a plate of spaghetti or I can spend \$10 to get an apple pie, then \$10 gets me the choice of spaghetti or pie (but it doesn't get me $A \otimes B$, where I have both spaghetti and pie).
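A programming-language analogy for "a hypothesis can only be used once" is Rust's move semantics — this is my own illustration, not part of the original answer, and Rust's ownership is only an approximation of linearity (Rust allows dropping a value unused, which strict linear logic does not). The hypothetical `Cake` type plays the role of a linear hypothesis:

```rust
// A Cake value models a linear hypothesis: it is consumed when used.
struct Cake;

// `eat` takes ownership of the cake, so the hypothesis is used up.
fn eat(_c: Cake) -> &'static str {
    "no cake left"
}

fn main() {
    let cake = Cake; // hypothesis: I have a cake
    let result = eat(cake); // the hypothesis is consumed here
    // eat(cake); // would not compile: `cake` was already moved
    assert_eq!(result, "no cake left");
}
```

Trying to use `cake` a second time is a compile-time error, mirroring how a linear hypothesis cannot be reused unless explicitly marked reusable (the `!` modality in linear logic).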