
Part of the book series: Outstanding Contributions to Logic (OCTR, volume 22)

Abstract

Our goal is to articulate a clear rationale for relevance-sensitive propositional logic. The method: truth-trees. Familiar decomposition rules for truth-functional connectives, accompanied by novel ones for the arrow, together with a recursive rule, generate a set of ‘acceptable’ formulae that properly contains all theorems of the well-known system R and is closed under substitution, conjunction, and detachment. We conjecture that it satisfies the crucial letter-sharing condition.

Second Reader: A. Avron, Tel-Aviv University


References

  • Anderson, A. R., & Belnap, Jr., N. D. (1975). Entailment. The logic of relevance and necessity (Vol. 1). Princeton: Princeton University Press.

  • Anderson, A. R., Belnap, Jr., N. D., & Dunn, J. M. (1992). Entailment. The logic of relevance and necessity (Vol. 2). Princeton: Princeton University Press.

  • Avron, A. (1992). Whither relevance logic? Journal of Philosophical Logic, 21(3), 243–281.

  • Avron, A. (2014). What is relevance logic? Annals of Pure and Applied Logic, 165(1), 26–48.

  • Avron, A., Arieli, O., & Zamansky, A. (2018). Theory of effective propositional paraconsistent logics (Vol. 75). Studies in logic. London: College Publications.

  • Bimbó, K., & Dunn, J. M. (2018). Larisa Maksimova’s early contributions to relevance logic. In Larisa Maksimova on implication, interpolation, and definability (Vol. 15, pp. 33–60). Outstanding contributions to logic. Berlin: Springer.

  • Bimbó, K., Dunn, J. M., & Ferenz, N. (2018). Two manuscripts, one by Routley, one by Meyer: the origins of the Routley-Meyer semantics for relevance logics. Australasian Journal of Logic, 15(2), 171–209.

  • Bloesch, A. (1993). A tableau style proof system for two paraconsistent logics. Notre Dame Journal of Formal Logic, 34(2), 295–301.

  • Brady, R. (Ed.). (2003). Relevance logics and their rivals (Vol. 2). Farnham: Ashgate Publishing.

  • Brady, R. T. (2006). Normalized natural deduction systems for some relevant logics. I. The logic DW. The Journal of Symbolic Logic, 71(1), 35–66.

  • Burgess, J. P. (2009). Philosophical logic. Princeton foundations of contemporary philosophy. Princeton: Princeton University Press.

  • D’Agostino, M., Gabbay, D., & Broda, K. (1999). Tableau methods for substructural logics. In Handbook of tableau methods (pp. 397–467). Dordrecht: Kluwer Academic Publishers.

  • Dunn, J. (1986). Relevance logic and entailment. In D. Gabbay & F. Guenthner (Eds.), Handbook of philosophical logic (1st ed., Vol. 3, pp. 117–224). Dordrecht: Reidel.

  • Dunn, J., & Restall, G. (2002). Relevance logic. In D. Gabbay & F. Guenthner (Eds.), Handbook of philosophical logic (2nd ed., Vol. 6, pp. 1–128). Amsterdam: Kluwer.

  • Dunn, J. M. (1976). Intuitive semantics for first-degree entailments and ‘coupled trees’. Philosophical Studies, 29(3), 149–168.

  • Harrison, J. (2009). Without loss of generality. In Theorem proving in higher order logics (Vol. 5674, pp. 43–59). Lecture notes in computer science. Berlin: Springer.

  • Hughes, G., & Cresswell, M. (1996). A new introduction to modal logic. London: Routledge.

  • Humberstone, L. (2011). The connectives. Cambridge: MIT Press.

  • Incurvati, L. (2020). Conceptions of set and the foundations of mathematics. Cambridge: Cambridge University Press.

  • Loveland, D. W., Hodel, R. E., & Sterrett, S. G. (2014). Three views of logic. Mathematics, philosophy, and computer science. Princeton: Princeton University Press.

  • Maddux, R. D. (2010). Relevance logic and the calculus of relations. Review of Symbolic Logic, 3(1), 41–70.

  • Makinson, D. (2005). Logical friendliness and sympathy. In Logica Universalis (pp. 191–205). Basel: Birkhäuser.

  • Makinson, D. (2014). Relevance logic as a conservative extension of classical logic. In David Makinson on classical methods for non-classical problems (Vol. 3, pp. 383–398). Outstanding contributions to logic. Dordrecht: Springer.

  • Makinson, D. (2017). Relevance via decomposition: a project, some results, an open question. Australasian Journal of Logic, 14(3), 356–377.

  • Makinson, D. (2020). Sets, logic and maths for computing (3rd ed.). Undergraduate topics in computer science. Berlin: Springer.

  • Mares, E. (2012). Relevance logic. Stanford encyclopedia of philosophy. https://plato.stanford.edu/entries/logic-relevance/.

  • McRobbie, M. (1977). A tableau system for positive relevant implication (abstract). Bulletin of the Section of Logic (Polish Academy of Sciences, Institute of Philosophy and Sociology), 6, 99–101. Relevance Logic Newsletter, 2, 99–101. Accessible at http://aal.ltumathstats.com/curios/relevance-logic-newsletter.

  • McRobbie, M. (1979). A proof-theoretic investigation of relevant and modal logics. PhD thesis, Australian National University.

  • McRobbie, M., & Belnap, N. (1977). Relevant analytic tableaux (abstract). Relevance Logic Newsletter, 2, 46–49. Accessible at http://aal.ltumathstats.com/curios/relevance-logic-newsletter.

  • McRobbie, M. A., & Belnap, N. D. (1979). Relevant analytic tableaux. Studia Logica, 38(2), 187–200.

  • McRobbie, M. A., & Belnap, N. D. (1984). Proof tableau formulations of some first-order relevant orthologics. Bulletin of the Section of Logic (Polish Academy of Sciences, Institute of Philosophy and Sociology), 13(4), 233–240.

  • Øgaard, T. (2019). Non-Boolean classical relevant logics. Synthese. https://doi.org/10.1007/s11229-019-02507-z.

  • Pabion, J.-F. (1979). Beth’s tableaux for relevant logic. Notre Dame Journal of Formal Logic, 20(4), 891–899.

  • Priest, G. (2008). An introduction to non-classical logic. From if to is (2nd ed.). Cambridge introductions to philosophy. Cambridge: Cambridge University Press.

  • Routley, R. (2018). Semantic analysis of entailment and relevant implication: I. Australasian Journal of Logic, 15(2), 210–279. Circulated privately 1970/71, transcribed by Nicholas Ferenz.

  • Routley, R., Plumwood, V., Meyer, R. K., & Brady, R. T. (1982). Relevant logics and their rivals. Part I. Atascadero: Ridgeview Publishing Co.

  • Schechter, E. (2005). Classical and nonclassical logics. Princeton, NJ: Princeton University Press.

  • Slaney, J. (1995). MaGIC, matrix generator for implication connectives, release 2.1. Technical report, Australian National University.

  • Swirydowicz, K. (1999). There exist exactly two maximal strictly relevant extensions of the relevant logic \(R\). Journal of Symbolic Logic, 64(3), 1125–1154.

  • Tennant, N. (1979). Entailment and proofs. Proceedings of the Aristotelian Society, New Series, 179, 167–189.

  • Urquhart, A. (1972). Semantics for relevance logics. The Journal of Symbolic Logic, 37, 159–169.

  • Urquhart, A. (1984). The undecidability of entailment and relevant implication. Journal of Symbolic Logic, 49(4), 1059–1073.

  • Urquhart, A. (1989). What is relevant implication? In J. Norman & R. Sylvan (Eds.), Directions in relevant logic (Vol. 1, pp. 167–174). Reason and argument. Amsterdam: Kluwer.

  • Urquhart, A. (2016). Relevance logic: problems open and closed. Australasian Journal of Logic, 13(1), 11–20.

  • van Benthem, J. (1983). Review of B.J. Copeland “On when a semantics is not a semantics: some reasons for disliking the Routley-Meyer semantics for relevance logic”. Journal of Symbolic Logic, 49, 994–995.

  • Veltman, F. (1985). Logics for conditionals. PhD thesis, University of Amsterdam.


Acknowledgements

Thanks to Marcello d’Agostino, Lloyd Humberstone, Paul McNamara, Karl Schlechta and Alasdair Urquhart for comments on various drafts over several years; Michael McRobbie for kindly providing a copy of his dissertation; Tore Øgaard for checking some formulae with the programs MaGIC and Prover9; and students of LSE’s PH217/419 for challenging questions in the classroom. Special thanks to Arnon Avron, whose penetrating and constructive comments as a reader for this publication led to considerable improvements.

Author information

Correspondence to David Makinson.

Appendices

The appendices enlarge on matters arising in the main text. They deal with five topics: (1) verification of direct acceptability for the axioms of R; (2) comparison of our treatment of disjunction and conjunction with that of familiar natural deduction systems; (3) earlier attempts to develop tree/tableau procedures for relevance logic, notably by McRobbie & Belnap; (4) some properties of acceptability when restricted to the language of negation and arrow; (5) experience teaching the material to students.

2.1.1 Appendix 1: Direct Acceptability for Axioms of R

In this appendix we verify the acceptability of the axiom schemes of the relevance logic R, noted in Observation 2.3, with comments on those axioms as we go. The axiomatization considered is the standard one found in Anderson et al. (1992), page xxiv (also in Mares 2012, Appendix A), using the ‘basic’ connectives \(\lnot \), \(\wedge \), \(\vee \), \(\rightarrow \).

Some formulations of R in the literature also contain auxiliary primitives, notably non-classical two-place connectives of ‘fusion’ and ‘fission’ (known in the linear logic literature as multiplicative conjunction and disjunction) and/or a zero-ary connective (propositional constant) t, in an effort to smooth some of the wrinkles in the Routley–Meyer semantics. The present account in terms of acceptability has no need for auxiliary connectives: we treat the fusion and fission of \(\phi\) with \(\psi\) as no more than abbreviations for \(\lnot(\phi \rightarrow \lnot\psi)\) and \(\lnot\phi \rightarrow \psi\) respectively, and we dispense entirely with the constant t which, in the view of the author (and, e.g., of Avron et al. 2018, Sect. 11.1.1), is contrary to the spirit of relevance logic.
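The abbreviations just described are purely mechanical rewrites. A minimal sketch in Python (the tuple-based encoding of formulae and the function name are our own illustration, not the chapter's notation):

```python
# Formulas as nested tuples: ('not', phi), ('arrow', phi, psi),
# ('fusion', phi, psi), ('fission', phi, psi); strings are atoms.

def expand(f):
    """Rewrite fusion/fission into the basic connectives 'not' and 'arrow'."""
    if isinstance(f, str):                      # propositional letter
        return f
    op, *args = f
    args = [expand(a) for a in args]            # expand subformulae first
    if op == 'fusion':                          # phi o psi := not(phi -> not psi)
        phi, psi = args
        return ('not', ('arrow', phi, ('not', psi)))
    if op == 'fission':                         # phi + psi := not phi -> psi
        phi, psi = args
        return ('arrow', ('not', phi), psi)
    return (op, *args)

print(expand(('fusion', 'p', 'q')))
# ('not', ('arrow', 'p', ('not', 'q')))
```

Because subformulae are expanded before the outer connective, nested occurrences of fusion and fission are eliminated in a single pass.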

For the first-degree axiom schemes of R, that is, those of the form \(\alpha \rightarrow \beta\) where neither antecedent nor consequent contains arrows, it suffices to take a classical truth-tree with root r: \(\lnot(\alpha \rightarrow \beta)\), apply counter-case to get \(\alpha\), \(\lnot\beta\), then use the rules for the classical connectives \(\lnot\), \(\wedge\), \(\vee\) only, finally checking by inspection that every branch satisfies parity. That covers axiom schemes 1 (identity) \(\alpha \rightarrow \alpha\); 5 (\(\wedge\)-elimination) \((\alpha \wedge \beta) \rightarrow \alpha\), \((\alpha \wedge \beta) \rightarrow \beta\); 6 (\(\vee\)-introduction) \(\alpha \rightarrow (\alpha \vee \beta)\), \(\beta \rightarrow (\alpha \vee \beta)\); 9 (distribution) \((\alpha \wedge (\beta \vee \gamma)) \rightarrow ((\alpha \wedge \beta) \vee (\alpha \wedge \gamma))\); and 11 (double negation elimination) \(\lnot\lnot\alpha \rightarrow \alpha\).
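The classical check just described can be sketched in code. A minimal illustration for \(\wedge\)-elimination \((p \wedge q) \rightarrow p\), under our own assumptions: the tuple encoding and helper names are invented, and the parity check is omitted since for first-degree schemes it is verified by inspection.

```python
# A minimal classical truth-tree check for the scheme (p & q) -> p.
# Formulas: strings for atoms, ('not', f), ('and', f, g), ('or', f, g).

def decompose(branch):
    """Return the fully decomposed branches growing from `branch`."""
    for i, f in enumerate(branch):
        rest = branch[:i] + branch[i+1:]
        if isinstance(f, tuple) and f[0] == 'and':
            return decompose(rest + [f[1], f[2]])
        if isinstance(f, tuple) and f[0] == 'or':          # fork
            return decompose(rest + [f[1]]) + decompose(rest + [f[2]])
        if isinstance(f, tuple) and f[0] == 'not' and isinstance(f[1], tuple):
            g = f[1]
            if g[0] == 'not':                              # double negation
                return decompose(rest + [g[1]])
            if g[0] == 'and':                              # de Morgan fork
                return (decompose(rest + [('not', g[1])]) +
                        decompose(rest + [('not', g[2])]))
            if g[0] == 'or':
                return decompose(rest + [('not', g[1]), ('not', g[2])])
    return [branch]

def crashes(branch):
    """A branch crashes when it contains a crash-pair {f, not f}."""
    return any(('not', f) in branch for f in branch)

# Counter-case applied to the root not((p & q) -> p): suppose p & q and not p.
branches = decompose([('and', 'p', 'q'), ('not', 'p')])
print(all(crashes(b) for b in branches))   # True: every branch has a crash-pair
```

Running the same two helpers on the counter-case outputs for the other first-degree schemes likewise closes every branch.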

Of these first-degree schemes, the only one whose tree we write out in full is distribution. It deserves explicit attention because it fails under some other approaches to relevance logic (notably that of McRobbie and Belnap 1984, see Appendix 3) as well as causing headaches for the standard natural deduction approach (as discussed in Appendix 2). The tree for distribution in Fig. 2.9 is a familiar one. It has four branches, each with its crash-pair. There are just two critical nodes, and they are common to all four branches. Both critical nodes are in the trace of each of the four crash-pairs, so parity is satisfied.

Fig. 2.9 Distribution

The remaining schemes of R are of higher degree. For each of them we exhibit a directly acceptable truth-tree, with comments.

Scheme 2 (suffixing): \((\alpha \rightarrow \beta) \rightarrow ((\beta \rightarrow \gamma) \rightarrow (\alpha \rightarrow \gamma))\). This may be understood as an ‘exported’ or ‘unpacked’ (and hence strengthened) form of transitivity \(((\alpha \rightarrow \beta) \wedge (\beta \rightarrow \gamma)) \rightarrow (\alpha \rightarrow \gamma)\) for the arrow. The unique branch contains a single crash-pair, with all six critical nodes in its trace, so parity is satisfied. The linearity of the tree is a consequence of the fact that the only connectives involved are \(\rightarrow\), \(\lnot\) (cf. Appendix 4).

Fig. 2.10 Suffixing

We can use Fig. 2.10 to illustrate a point made in Sect. 2.4.1. Substituting \(\alpha\) for \(\beta\), \(\gamma\) gives \((\alpha \rightarrow \alpha) \rightarrow ((\alpha \rightarrow \alpha) \rightarrow (\alpha \rightarrow \alpha))\), with an acceptable tree obtainable by the same substitution in the tree. Now, for suffixing, there is no ambiguity about how each node is obtained. For example, \(n_{8}\): \(\gamma\) is obtained from \(n_{7}\): \(\beta\) and \(n_{3}\): \(\beta \rightarrow \gamma\) by modus ponens. But in the tree for the substitution instance, \(n_{8}\): \(\alpha\) may be obtained in various ways: from \(n_{7}\): \(\alpha\) and \(n_{3}\): \(\alpha \rightarrow \alpha\) as before, but also from, say, \(n_{5}\): \(\alpha\) and \(n_{1}\): \(\alpha \rightarrow \alpha\). The former pattern of justification satisfies parity just as it did in the tree for suffixing, but the latter does not: the critical node \(n_{3}\): \(\alpha \rightarrow \alpha\) is no longer in the trace of the designated crash-pair \(\{n_{6}\): \(\lnot\alpha\), \(n_{8}\): \(\alpha\}\), although its partner \(n_{4}\): \(\lnot(\alpha \rightarrow \alpha)\) is. So, in this example, the identification of the trace, the satisfaction of parity, and the status of the tree as directly acceptable or not all depend on its justificational pattern, going beyond its bare tree structure, the formulae attached to nodes, and the choice of crash-pairs.

Scheme 3 (assertion): \(\alpha \rightarrow ((\alpha \rightarrow \beta) \rightarrow \beta)\). As for suffixing (above) and contraction (below), its only connectives are \(\lnot\), \(\rightarrow\), so it has only one branch. In the directly acceptable tree of Fig. 2.11, the crash-pair \(\{n_{4}, n_{5}\}\) has all four critical nodes in its trace, so parity is satisfied.

Fig. 2.11 Assertion

Assertion may be seen as an ‘exported’ or ‘unpacked’ version of the formula \((\alpha \wedge (\alpha \rightarrow \beta)) \rightarrow \beta\) expressing modus ponens (see Sect. 2.4.1). Another way of understanding it intuitively is as the result of permuting the antecedents of the trivial \((\alpha \rightarrow \beta) \rightarrow (\alpha \rightarrow \beta)\). It is the only axiom from the list that is unprovable in the weaker axiom systems NR and E that seek to embody a composite requirement of relevance-and-necessity for the arrow (see, e.g., Sect. 28.1 of Anderson and Belnap 1975, Sect. 2.4 of Mares 2012). It is also the only one that is of the form \(\phi \rightarrow \psi\) where \(\phi\) contains no arrows but \(\psi\) has arrow as principal connective.

Scheme 4 (contraction): \((\alpha \rightarrow (\alpha \rightarrow \beta)) \rightarrow (\alpha \rightarrow \beta)\). The crash-pair \(\{n_{4}, n_{6}\}\) has all four critical nodes in its trace, so parity is satisfied (Fig. 2.12).

Fig. 2.12 Contraction

An interesting feature of this tree, reflecting the repetition of \(\alpha\) in the antecedent of the scheme itself, is that node \(n_{3}\): \(\alpha\) is used twice: first to get \(n_{5}\): \(\alpha \rightarrow \beta\) and then to get \(n_{6}\): \(\beta\). Another feature is that the unique branch contains a second crash-pair \(\{n_{2}, n_{5}\}\) which, however, does not satisfy the parity condition, since the critical node \(n_{4}\) is not in its trace although its partner \(n_{3}\) is.

Scheme 7 (\(\wedge\)-introduction, \(\wedge^{+}\)): \(((\alpha \rightarrow \beta) \wedge (\alpha \rightarrow \gamma)) \rightarrow (\alpha \rightarrow (\beta \wedge \gamma))\). There are two branches, each with its crash-pair, and every critical node is in the trace of each crash-pair.

Fig. 2.13 \(\wedge\)-introduction

This is a second-degree scheme, where degree measures the maximum embedding of arrows within arrows, defined recursively in the obvious way. If we consider its unpacked third-degree version \((\alpha \rightarrow \beta) \rightarrow ((\alpha \rightarrow \gamma) \rightarrow (\alpha \rightarrow (\beta \wedge \gamma)))\), we find that while both branches in its truth-tree still crash, they do so without parity. The same happens for scheme 8 (\(\vee\)-elimination) below. This illustrates the well-known difference of power, for A as for many relevance-sensitive logics, between the classically equivalent formulae \((\phi \wedge \psi) \rightarrow \theta\) and \(\phi \rightarrow (\psi \rightarrow \theta)\) (Fig. 2.13).
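The recursive definition of degree alluded to above can be sketched as follows; the tuple encoding of formulae is our own illustration.

```python
# Degree: maximum nesting of arrows within arrows, recursively.
def degree(f):
    if isinstance(f, str):                        # atom
        return 0
    op, *args = f
    sub = max(degree(a) for a in args)
    return sub + 1 if op == 'arrow' else sub      # only arrows add depth

# The packed scheme 7 versus its unpacked third-degree version.
packed   = ('arrow', ('and', ('arrow', 'a', 'b'), ('arrow', 'a', 'g')),
            ('arrow', 'a', ('and', 'b', 'g')))
unpacked = ('arrow', ('arrow', 'a', 'b'),
            ('arrow', ('arrow', 'a', 'g'), ('arrow', 'a', ('and', 'b', 'g'))))
print(degree(packed), degree(unpacked))   # 2 3
```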

From our perspective, two factors appear to lie behind this difference of power. On the one hand, decomposing \(\lnot(\phi \rightarrow (\psi \rightarrow \theta))\) gives rise to two applications of counter-case, thus two critical pairs risking failure of parity, while \(\lnot((\phi \wedge \psi) \rightarrow \theta)\) produces only one critical pair. But that is not the whole story, for while the third-degree versions of \(\wedge\)-introduction and \(\vee\)-elimination are not directly acceptable (and, apparently, not acceptable at all), we have already seen that there are other unpacked third-degree schemes, namely suffixing and assertion, that are directly acceptable just like their packed second-degree counterparts. The reason for this contrast seems to be that, for each of \(\wedge\)-introduction and \(\vee\)-elimination, decomposition creates a fork with a critical node used on one branch while its partner is used on the other, whereas for suffixing and assertion there are no forks to create such problems.

This example illustrates the way in which the decompositional approach can help explain similarities and differences between formulae that otherwise can be difficult to understand. Explanations can also be given in terms of relevance-sensitive natural deduction, but less transparently.

Scheme 8 (\(\vee\)-elimination): \(((\alpha \rightarrow \gamma) \wedge (\beta \rightarrow \gamma)) \rightarrow ((\alpha \vee \beta) \rightarrow \gamma)\). A directly acceptable tree for it is given in Fig. 2.14. The same comments may be made as for \(\wedge\)-introduction.

Fig. 2.14 \(\vee\)-elimination

Scheme 10 (contraposition): \((\alpha \rightarrow \lnot\beta) \rightarrow (\beta \rightarrow \lnot\alpha)\). This is the form used in the standard axiomatization of R, with other familiar forms of contraposition derivable there as theorems. In the tree of Fig. 2.15, the unique branch has crash-pair \(\{n_{3}, n_{6}\}\) and each of the four critical nodes is in its trace.

Fig. 2.15 Contraposition

2.1.2 Appendix 2: Disjunction, Conjunction

This appendix compares the treatment of disjunction and conjunction in our truth-trees-with-parity, with the way they are handled in the usual natural deduction system for the relevance logic R. We assume some familiarity with the latter; for background see Anderson and Belnap (1975) or, for a textbook presentation, part 3 of Loveland et al. (2014).

It is convenient to begin with disjunction. Consider the formula \((p \rightarrow (q \vee (p \rightarrow q))) \rightarrow (p \rightarrow q)\) (skewed cases) which, as noted in Observation 2.5, is directly acceptable but not in R. We recall how an attempt to derive it by natural deduction for R fails. From the suppositions \(p \rightarrow (q \vee (p \rightarrow q))\) and p one infers \(q \vee (p \rightarrow q)\); one then makes two sub-proofs, the first supposing q to obtain q ipso facto, the second supposing \(p \rightarrow q\) and applying modus ponens to that with p, to get q again. Thus one of the sub-derivations appeals to the supposition p while the other does not, violating the special proviso on \(\vee^{-}\) in natural deduction for R, that these sub-derivations must appeal to exactly the same suppositions (other than the two disjuncts themselves).

In contrast, while the truth-tree for \((p \rightarrow (q \vee (p \rightarrow q))) \rightarrow (p \rightarrow q)\) forks at its node labeled \(q \vee (p \rightarrow q)\), the parity constraint acts on each branch separately, without comparison with the other branch, and, as we saw in Observation 2.5, it is satisfied. We might say, roughly, that the parity condition is more generous toward disjunctive reasoning than is the ‘same suppositions’ constraint.

But, one may ask, isn't it too generous? Can't we build acceptable trees for mangle \(\alpha \rightarrow (\beta \rightarrow \alpha)\) and its instance mingle \(\alpha \rightarrow (\alpha \rightarrow \alpha)\) by simulating the notorious trick, legitimate in classical natural deduction, that the ‘same suppositions’ proviso of relevant natural deduction was designed to block? Recall that classically, for mangle, one may first suppose \(\alpha\), then suppose \(\beta\), use \(\vee^{+}\) on the former to get \(\alpha \vee (\beta \rightarrow \alpha)\), then carry out sub-derivations with the two disjuncts as suppositions, each obtaining \(\alpha\), discharge those two suppositions by \(\vee^{-}\), and finally apply conditional proof twice. For mingle, one does the same with \(\beta\) instantiated to \(\alpha\) throughout. In relevantized natural deduction, the derivation is blocked by the ‘same suppositions’ proviso on \(\vee^{-}\) mentioned above, but the question is: can't we imitate the classical procedure in an acceptable truth-tree?

Fig. 2.16 Mingle: a devious tree failing parity

For direct acceptability, the answer is simple: we cannot transcribe the application of \(\vee ^{+}\) since our trees can only decompose, never compose. No disjunctions can be introduced into the direct decomposition trees for mangle or mingle, the trees do not fork, and the unique branch, exhibited in Fig. 2.5 of Sect. 2.5.2, fails parity.

For the more general notion of acceptability, the answer is a little more complex. With the recursive rule available, we can indeed simulate the natural deduction step \(\vee ^{+}\), but the crash-pair in one of the ensuing branches fails parity. In detail, the truth-tree in Fig. 2.16 for mingle is the same, up to node \(n_{4}\), as the one in Fig. 2.5 that failed for direct acceptability.

Turning now to conjunction, we can say that it too is treated more generously by truth-trees-with-parity than by the standard natural deduction system for R. For, on the one hand, as we saw in Appendix 1, a familiar classical truth-tree for the distribution principle \((\alpha \wedge (\beta \vee \gamma)) \rightarrow ((\alpha \wedge \beta) \vee (\alpha \wedge \gamma))\) happily satisfies parity. On the other hand, notoriously, distribution faces a difficulty in the natural deduction system for R because of a constraint that the system places on \(\wedge^{+}\), which we briefly recall.

Just as unbridled \(\vee ^{-}\) can be used to ‘cheat’ its way around the ‘actual use’ constraint on conditional proof to establish mangle, so too can unrestrained \(\wedge ^{+}\). One may first suppose \(\alpha \), then suppose \(\beta \), use \(\wedge ^{+}\) on these to get \(\alpha \wedge \beta \), then \(\wedge ^{-}\) back to \(\alpha \) and finally apply conditional proof twice. The supposition \(\beta \) is used in the \(\wedge ^{+}\)/\(\wedge ^{-}\) detour and so the ‘actual use’ constraint on CP is satisfied. The trick cannot be simulated in our truth-trees. For direct acceptability, all rules decompose so that \(\wedge ^{+}\), like \(\vee ^{+}\), has no role. For indirect acceptability, the recursive rule takes only one node as input, never two (see the discussion in Sect. 2.7.2) so that \(\wedge ^{+}\) is not available.

To block this \(\wedge^{+}\)/\(\wedge^{-}\) ‘funny business’, Anderson & Belnap introduced a ‘same suppositions’ proviso on \(\wedge^{+}\) echoing the one on \(\vee^{-}\): one can infer a conjunction from its two conjuncts only if the two conjuncts depend on exactly the same suppositions. However, this proviso has the side effect of also blocking derivations of distribution, which must then be recovered by postulating it as a primitive rule (and as an axiom in the Hilbertian presentation of R, as seen in Appendix 1). This rather ad hoc move has long been a source of unease; as remarked by D’Agostino et al. (1999), page 416, ‘the integration of this axiom into the proof-theory of R has always been a source of considerable difficulty’.

It is seldom noted that, in response to the problem, Anderson and Belnap (1975) (Sect. 27.2, page 348) mooted a modification of their definition of dependence in natural deduction. The essential idea behind the change is that when, in a derivation, one reaches \(\phi \vee \psi\) and creates auxiliary derivations headed, respectively, by \(\phi\) and \(\psi\), those two formulae are not given fresh dependency labels but are taken to depend on the same suppositions as did \(\phi \vee \psi\). This has the effect of increasing the set of earlier suppositions that are treated as being used in each of its two auxiliary derivations, thus softening the bite of the provisos on \(\vee^{-}\) and \(\wedge^{+}\), whose formulations are left unchanged. Essentially the same proposal has been made by Urquhart (1989) (see also the brief remark in Urquhart 2016), Dunn and Restall (2002), and Brady (2006).

Anderson & Belnap observe that the suggested change rehabilitates the classical derivation of distribution, by allowing its applications of \(\wedge^{+}\) in its two subordinate derivations. They also note that it yields the formula \(((p \rightarrow (q \vee r)) \wedge (q \rightarrow s)) \rightarrow (p \rightarrow (s \vee r))\), which we mentioned in Sect. 2.6 as being one of several that are directly acceptable but not in R. Nevertheless, not having at hand a matrix like \(M_{0}\) (see Sect. 2.6) to validate all formulae that the change renders derivable while still invalidating explosion, they seem to have feared that it might yield too much and refrained from explicitly recommending it, unlike Urquhart (1989), page 169, who greeted it enthusiastically.

Although quite technical, the mooted revision raises two terminological issues, each with an interesting conceptual resonance. For Anderson & Belnap, the formulae \(\phi \), \(\psi \) heading the subordinate derivations are still suppositions (in their wording, hypotheses) although of a rather ghostly kind; on the other hand, Urquhart (1989) prefers not to see them as suppositions (in his wording, assumptions) at all, but as marking a split of the argument into two parts (we might say, into two ‘cases’). Again, Anderson & Belnap saw the revision as replacing one rule for disjunction elimination (in their notation, \(\vee \)E, where ‘E’ is for elimination) by another (\(\vee \)E\(^{s}\), where ‘s’ is for strong); for the present author, it is perhaps more transparent to see the move as keeping the same rule but with a new definition of dependence that weakens the impact of its proviso.

The suggested revision does not quite correspond to what is going on in our truth-trees, because the proviso on disjunctive proof in the natural deduction system continues to compare its subordinate derivations, whereas the parity condition is internal to each branch of a truth-tree, without comparing branches. But the output does come closer to acceptability, at least in so far as the status of distribution is concerned. Perhaps the two outputs coincide, although the author suspects that there are still some subtle differences.

2.1.3 Appendix 3: Earlier Work on Truth-Trees for Relevance Logic

Perhaps the first attempt at articulating truth-trees (aka semantic tableaux) for relevance logics is contained in Sects. 6 and 7 of Routley (2018), a manuscript that was circulated privately to Meyer and some others in 1970/71, and not published until 2018.

Dunn (1976) gave what appears to be the first published construction. Using pairs of trees, it works beautifully—but it covers only first-degree conditionals, i.e., formulae of the form \(\phi \rightarrow \psi\) where \(\phi\), \(\psi\) contain no arrows, and has resisted attempts to extend coverage much further.

The most intensive work on the topic was carried out by McRobbie in his doctoral thesis (McRobbie 1979), with parts in the abstracts McRobbie (1977) and McRobbie and Belnap (1977), followed by the papers of McRobbie and Belnap (1979, 1984). Much of this is brought together in Anderson et al. (1992), Sect. 60; there is also a very clear exposition, with insightful discussion, in D’Agostino et al. (1999), Sect. 2.

Because of his limited coverage, Dunn (1976) did not consider the decomposition of unnegated arrows. They are treated by Routley, and by McRobbie & Belnap in the texts mentioned above; however, they are not decomposed by modus ponens, but by implicative forking: the tree forks into the consequent and the negation of the antecedent. These authors decompose negated arrows by the counter-case rule; but there is no recognition of the suppositional nature of its outputs when we leave the classical context for one that is sensitive to relevance. None of the authors bring recursion into the decomposition procedure.

Examining the publications of McRobbie and Belnap in more detail, we can note the following features.

  • The language of McRobbie’s 1977 abstract lacks negation, covering only the ‘positive’ connectives \(\rightarrow \), \(\wedge \), \(\vee \), \(\circ \) (fusion) and a truth-constant t.

  • In McRobbie and Belnap (1977, 1979), negation is present, but the language is still severely restricted, since arrow is the only other connective allowed. Of course, as remarked at the beginning of Appendix 1, intensional versions of conjunction and disjunction (aka fusion and fission) are definable from arrow with negation, but their ordinary (extensional) counterparts are not, so the restriction is a serious one. Despite the absence of ordinary conjunction and disjunction, the trees of McRobbie & Belnap can still have multiple branches, since unnegated arrows are decomposed by forking. The trees are subject to a dependency condition, requiring that every node is in the trace of some crash-pair on some branch. This contrasts with our parity constraint in several respects, brought out by the following statement of the latter: every branch contains some crash-pair such that, for every critical node in the tree, if it is in the trace of the crash-pair then so is its partner.

  • Ordinary conjunction and disjunction are finally tackled by McRobbie and Belnap (1984). Standard classical decomposition rules for those connectives are accompanied by a rather cumbersome procedure of copying unused items from above a fork into each of the branches issuing from the fork. We do not give the details of the copying procedure; apart from the paper itself, there is a clear account in D’Agostino et al. (1999), Sect. 2.1 pp. 414–416. The important point to note is that the system goes into overkill, failing to validate distribution of \(\wedge \) over \(\vee \), just as did Anderson & Belnap’s system of natural deduction (see Appendix 2). In both contexts, repair can be carried out by hacking—just add distribution as an additional decomposition or deduction rule—but the patch is hardly satisfying.

From our perspective, the difficulties faced by McRobbie and Belnap have two main sources. One is in the decomposition of negated arrows, where there is no recognition of the special role, and suppositional status, of critical nodes, thus blocking articulation of a notion of branches crashing with parity. The other source is in the decomposition of unnegated arrows, where the authors stick with the classical step of implicative forking, rather than modus ponens.

Both these features appear to stem from a failure to take sufficiently seriously, in the context of truth-trees, the different logical powers of \(\phi \rightarrow \psi \) and \(\lnot \) \(\phi \vee \psi \). For negated arrows, counter-case does not track a relevantly admissible inference, but rather sets up a pair of wlog suppositions within the framework of an overall proof by reductio ad absurdum. For unnegated arrows, decomposition by implicative forking misses out on part of the force of the input.

Why were these obstacles not overcome in the years immediately following 1984? The author suspects that work on them was cut short by a tsunami in the relevance logic community, brought about by the Routley–Meyer possible-worlds semantics. Briefly suggested in the 1970/1 manuscript of Routley (2018) and immediately developed by Routley and Meyer (see Bimbó et al. 2018), that semantics provided an exciting new technique to explore. Work on truth-trees for relevance logic went backstage, to appear from time to time in a minor role. In the decades that followed, inspired by Kripke’s use of tableaux in his completeness proofs for modal logic in the 1960s, there were some attempts to render versions of the Routley–Meyer semantics for substructural logics computationally manageable by expressing some of the machinery of the models in terms of semantic tableaux; see, for example, the papers of Pabion (1979), Bloesch (1993), and Priest (2008).

The Routley–Meyer semantics also created an unfortunate methodological orientation. Its rules for evaluating formulae in a given world gave the impression that for relevance-sensitive logic not only the arrow but also negation, and perhaps even conjunction and disjunction, need to be treated non-classically. For negation, this vision was also encouraged by Dunn’s 1976 work on truth-tree pairs for first-degree arrow formulae, where the connective is treated in a four-valued way; while for conjunction and disjunction it was reinforced by the ‘anti-cheating’ constraints placed on rules for those connectives in Anderson & Belnap’s natural deduction system. The general perspective was expressed in an influential textbook on modal logic by Hughes and Cresswell (1996), where it is said in passing that ‘… in fact relevance logics differ from all the logics we have so far considered in that they require a non-standard interpretation of the PC symbols, in particular of negation’ (page 205).

2.1.4 Appendix 4: Some Properties of the \(\lnot , \rightarrow \) Fragment

In this appendix we establish two desirable properties that hold for acceptability when it is defined on the \(\lnot \), \(\rightarrow \) fragment of our \(\lnot \), \(\wedge \), \(\vee \), \(\rightarrow \) language. Note that we are not just looking at those \(\lnot \), \(\rightarrow \) formulae that are acceptable in the original sense; such formulae may have acceptable trees in which the recursive rule is applied to introduce the connectives \(\wedge \), \(\vee \). We are considering a situation in which the recursive rule is also limited to the \(\lnot \), \(\rightarrow \) language. The arguments below go through essentially because in that language all truth-trees have a single branch, without forks.

The first result concerns the notion of parity. Recall from Sect. 2.4.1 that its definition requires that for every branch B and every critical node c on B, if c is in the trace of Z\(_{B}\) then its partner is also on B and in the trace of Z\(_{B}.\) This prompts the question, raised in Sect. 2.4.2, whether it makes a difference if we impose the apparently stronger condition that for every branch B and every critical node c on B, both it and its partner are in the trace of \(Z_{B}.\)

Observation 2.9

For the language of \(\lnot \), \(\rightarrow \): whenever a formula has an acceptable truth-tree then it has one in which its unique branch satisfies the strengthened parity condition.

Outline of Proof

Let \(\alpha \) be a formula in that language and let T be a truth-tree with root r: \(\lnot \) \(\alpha \) that is acceptable (with the recursive rule also restricted to that language). Then T has a unique branch B that may be identified with T itself. Form \(T^* = B^*\) by deleting all nodes in B that are not in the trace of the designated crash-pair \(Z_{B}\). Then \(T^*\) is also a single branch decomposition tree, with the same root as T and the same designated crash-pair \(Z_{B^*} = Z_B\). It remains to check that \(T^*\) satisfies the stronger version of parity. Suppose c is a critical node on \(B^*\). Then c was already on B and was in the trace of \(Z_{B}\). So, by the original definition of parity, applied to T, the partner \(c'\) of c is also on B and is in the trace of \(Z_{B}\). Hence \(c'\) is on \(B^*\) where it is in the trace of \(Z_{B^*}\).    \(\square \)
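The pruning operation used in this proof is simple enough to sketch in code. The representation below is an illustrative assumption, not the chapter's formalism: a single-branch tree is a Python list of node identifiers, and the trace of the designated crash-pair is a set.

```python
# Sketch of the pruning step in the proof of Observation 2.9
# (hypothetical representation: a branch B is a list of node
# identifiers; the trace of the designated crash-pair Z_B is a set).

def prune_to_trace(branch, trace):
    # Delete every node of B that is not in the trace of Z_B; what
    # remains is the single-branch tree B* with the same root and the
    # same designated crash-pair.
    return [node for node in branch if node in trace]
```

For instance, `prune_to_trace(['r', 'n1', 'n2', 'z', 'z2'], {'r', 'n1', 'z', 'z2'})` drops the non-trace node `'n2'` and keeps the rest in order.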

The author has not been able to settle the question whether Observation 2.9 continues to hold for the full language using \(\lnot \), \(\wedge \), \(\vee \), \(\rightarrow \). In that context, a tree may have more than one branch and the operation, for a given branch B, of simply deleting all nodes that are not in the trace of \(Z_{B}\), can create havoc, as may be appreciated by considering the example of distribution in Appendix 1, Fig. 2.9. If we apply that deletion to, say, the leftmost branch of that figure, then we eliminate the nodes \(n_{4}\), \(n_{6}\) which produced forks, thus marooning nodes on the other side of each fork. Deleting the marooned nodes as well does not get one out of the difficulty, as it can destroy needed crash-pairs. In general, it does not seem possible to transform every acceptable tree into one in which every node on an arbitrary branch is in the trace of its designated crash-pair.

The second result of this appendix is that in the restricted language, acceptability coincides with being a theorem of R.

Observation 2.10

For the language of \(\lnot \), \(\rightarrow \): A formula is acceptable iff it is a theorem of R.

Proof Sketch

For the right-to-left direction, it is known that when a (\(\lnot \), \(\rightarrow )\)-formula is a theorem of R, then it has a derivation using only axioms that are (\(\lnot \), \(\rightarrow )\)-formulae with detachment the sole derivation rule (Anderson and Belnap 1975, Sect. 28.3.2, page 375). Those axioms continue to be directly acceptable in the restricted language (with the same verifications as in Observation 2.3), as does closure of acceptability under detachment (with the same verification as in Observation 2.7).

Left-to-right: Let \(\alpha \) be a formula in the language of \(\lnot \), \(\rightarrow \) with an acceptable truth-tree (where the recursive rule is restricted to such formulae), with root r: \(\lnot \) \(\alpha \). It has a unique branch B. We need to work delicately with sets X of nodes and multisets \(\Gamma \) of formulae.

Call a finite multiset \(\Gamma \) of formulae, with elements \(\alpha _{1},\ldots ,\alpha _{n}\), R-inconsistent iff \(!(\Gamma ) \in \) R, where \(!(\Gamma ) = \alpha _{1}\rightarrow (\alpha _{2}\rightarrow (\ldots (\alpha _{n-1}\rightarrow \lnot \alpha _{n})\ldots ))\); the order of the \(\alpha _{i}\) does not matter, since all permutations (with an appropriate contraposition when the last item is moved) are equivalent in R. Clearly, for the designated crash-pair \(\{z: \zeta , z': \lnot \zeta \}\) of B, the formula \(!\{\zeta , \lnot \zeta \}\) is in R. Also, R satisfies each of the following closure conditions corresponding to decomposition rules:

When \(\phi \rightarrow \,!(\Gamma ) \in \) R then \(\lnot \lnot \phi \rightarrow \,!(\Gamma ) \in \) R (for double negation elimination)

When \(\psi \rightarrow \,!(\Gamma ) \in \) R then \(\phi \rightarrow ((\phi \rightarrow \psi )\rightarrow \,!(\Gamma )) \in \) R (for modus ponens)

When \(\phi \rightarrow (\lnot \psi \rightarrow \,!(\Gamma )) \in \) R then \(\lnot (\phi \rightarrow \psi )\rightarrow \,!(\Gamma ) \in \) R (for counter-case)

When \(\psi \rightarrow \,!(\Gamma ) \in \) R and \(\phi \rightarrow \psi \in \) R then \(\phi \rightarrow \,!(\Gamma ) \in \) R (for the recursive rule).

Using these closure conditions, one can follow R-inconsistency backwards from the crash-pair to sets X of nodes in its trace that are ever-closer to r, as measured by the mean of the set of integer distances of the elements of X from r. Note that to activate the closure condition corresponding to counter-case, the critical nodes \(\phi \), \(\lnot \) \(\psi \) must both be in the trace of the crash-pair, as assured by strengthened parity using Observation 2.9. This progression stops with \(X =\) {r: \(\lnot \) \(\alpha \)}, so \(\lnot \lnot \) \(\alpha \in \) R and thus \(\alpha \in \) R as desired.    \(\square \)
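The nesting in the definition of \(!(\Gamma )\) can be made concrete with a small helper. This is purely illustrative (formulae as strings, the function name `bang` is our own), not part of the chapter's apparatus:

```python
# Illustrative helper (not from the chapter): build the formula !(Γ)
# for a finite non-empty multiset Γ with elements a1, ..., an, here
# represented as strings, giving a1 → (a2 → (... (a_{n-1} → ¬a_n) ...)).

def bang(gamma):
    *front, last = gamma          # gamma must be non-empty
    formula = f"¬{last}"          # innermost step: negate the last element
    for a in reversed(front):     # wrap the remaining elements, inside out
        formula = f"{a} → ({formula})"
    return formula
```

For the crash-pair \(\{\zeta , \lnot \zeta \}\), `bang(['ζ', '¬ζ'])` yields `'ζ → (¬¬ζ)'`, a theorem of R, as the proof requires; and for the singleton \(\{\lnot \alpha \}\) at the root it yields \(\lnot \lnot \alpha \), matching the final step of the progression.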

Corollary 2.10.1

Acceptability for the \(\lnot \), \(\rightarrow \) language is non-trivial (in the sense defined in Sect. 2.7) and satisfies letter-sharing.

Proof

By Observation 2.10, since R has those properties.    \(\square \)

2.1.5 Appendix 5: Pedagogical Remarks

Teaching this material to students of philosophy at LSE from 2017 through 2019 has provided the author with some classroom experience, shared in this appendix.

Honesty toward the students requires one to begin by frankly reviewing doubts about the whole enterprise of relevance logic (cf., e.g., Veltman 1985, Sect. I.2.1.4, Burgess 2009, Chap. 5, Makinson 2014, Sect. 6). There are three main grounds for scepticism.

In general terms, one should not confuse the logical question of whether an inference is valid, with the pragmatic one of whether it is wise to insist on pursuing the process rather than, say, going back to assumptions to revise or abandon some among them. Perhaps the very desire for relevance-sensitivity stems from the pragmatic concern rather than the logical one.

Second, the relevance or irrelevance of one proposition to another is largely a question of subject matter. As observed by van Benthem (1983), in ordinary discourse relevance is typically supplied by some background of assumptions that are taken for granted. Suppose, for example, that you are told: ‘If your cat has white, ginger, and black fur, then you can soon expect kittens’. Without any background information, the antecedent may appear quite irrelevant to the consequent. But given the information that, with rare exceptions, only female cats can have fur of three colors, plus the more widely known fact that, so long as they have not been neutered, female cats are usually quite fertile, there is indeed a substantive connection between the two parts of the conditional. Given this dependence on whatever subject matter happens to be in the speaker’s implicit assumptions, attempts to specify formal criteria for the truth of a relevant conditional may face major difficulties, and they will be inherited by any relevance-sensitive logic that is defined in terms of truth.

Finally, the history of the subject has revealed serious shortcomings in each of the main approaches in the literature—whether inelegance in formal implementation of a promising idea (the natural deduction account), poorly motivated and heteroclite axiom sets (the Hilbertian axiomatic approach) or difficulties in finding convincing intuitive motivation for formal devices (such as for the constraints imposed on three-place relations in the Meyer–Routley semantics). The area has acquired the reputation of a failed research project.

Students should also be warned that the subject has served as a trampoline for some quite startling positions in the philosophy of logic. But they can also be reassured that one may perfectly well be interested in relevance-sensitive logics as objects of study, without preaching that any of them should sweep classical logic aside, nor taking seriously doctrines of the existence of true contradictions, subsisting impossible worlds and so on, associated with the philosophy known as dialetheism. One may thus, with a free conscience, make unconstrained use of the resources of classical logic and pertinent parts of mathematics when reasoning about relevance-sensitivity, without constantly looking over one’s shoulder.

It is important that, from the beginning, students distinguish the requirement of relevance from the composite one of relevance-and-necessity. Following Descartes’ dictum, the present author’s personal view is that the composite condition should be broken down into its two parts, to be investigated separately until each is perfectly clear. Only then should their combination be attempted.

Optimally, to be able to appreciate what is going on, students should already have been exposed to truth-trees for classical propositional logic. The tweaks needed to incorporate a control for relevance then become understandable and even natural. Those who have spent time on natural deduction will need frequent reminding that the rules for direct acceptability always proceed by decomposition, never by composition—for example, we are not permitted to proceed from nodes labeled \(\phi \) and \(\psi \) to one labeled \(\phi \wedge \psi \), nor from one labeled \(\phi \) to another for \(\phi \vee \psi \). The only decomposition rule with more than one input formula is modus ponens; even for the recursive rule, we allow only one input, never more (Sect. 2.7.2, Fig. 2.8).

Once students have understood the limitations of direct acceptability and have been introduced to the recursive decomposition rule, another admission should frankly be made: acceptability currently faces the open problem of letter-sharing, for which a negative answer could well spell ruin. The author’s experience is that students appreciate being brought on stage in this way and are agreeably surprised to learn that there are still formal questions about classical propositional logic that have not been resolved.

In the classroom, the material is best presented through an analysis of examples, with definitions articulated formally only after their illustration in specific instances; this policy also guides the presentation in Chap. 11 of the textbook Makinson (2020). At the same time, it is helpful to encourage an ability to ‘smell’ examples—to articulate intuitions about them and conjecture whether they are acceptable before beginning the formal work of checking. At the beginning, students throw up their hands in bewilderment, unable to express suspicions or place confidence in guesses but, with sufficient practice, they can develop a fine-tuned sensitivity just as for natural deductions in classical logic and the undecidable system R (cf. remarks of Anderson and Belnap 1975, Sect. 28.1, page 350).

In any particular example, there are two quite different jobs to be done: construct a candidate tree and check it for parity. What is the best way of scheduling the two tasks? Two basic options arise. One can annotate the tree as one builds it, in such a way that satisfaction or failure of parity can be read directly from the completed tree; or one can build the tree without worrying too much about parity and then check when the construction is finished.

The construct-then-check order frees up the mind and so is less boring. The main job of the teacher may be to stop students from then doing the check by merely eyeballing the trees they have constructed; it should be carried out systematically. An algorithm that corresponds closely to the definition of parity iterates the following steps. Choose a branch and follow the trace of its designated crash-pair, from the crash-pair itself upwards toward the root, marking the nodes that are in the trace. When that has been done, inspect the critical nodes on that branch to see whether any one of them has a mark while its partner does not. If so, parity fails; otherwise it succeeds for that branch, and we can go on to check the next branch. Evidently, care is needed in that transition, since a node that is common to two branches may be in the trace of the crash-pair of one of them but not in that of the other. When there are only two branches in the tree, one can place the marks recording the trace of the crash-pair on the left branch to the left of the nodes, and those for the right branch to the right. But when there are more than two branches, one needs either to erase marks made for previous branches or to use more elaborate annotations.
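The construct-then-check routine can be sketched in a few lines of code. The data structures here are illustrative assumptions, not the chapter's formalism: each branch is represented by a mapping from its critical nodes to their partners, together with the set of nodes in the trace of its designated crash-pair.

```python
# Minimal sketch of the construct-then-check parity test (hypothetical
# representation; nodes are identified by arbitrary hashable values).

def branch_satisfies_parity(critical_partners, trace):
    """critical_partners: dict mapping each critical node on the branch
    to its partner; trace: set of nodes in the trace of the branch's
    designated crash-pair."""
    for node, partner in critical_partners.items():
        # Parity fails on this branch if a critical node is marked
        # (in the trace) while its partner is not.
        if node in trace and partner not in trace:
            return False
    return True

def tree_satisfies_parity(branches):
    # Each branch is checked against its own designated crash-pair's
    # trace; a node common to two branches may be in the trace of one
    # crash-pair but not the other, so the traces are kept separate.
    return all(branch_satisfies_parity(cp, tr) for cp, tr in branches)
```

Note that the check is vacuously satisfied by a critical node that lies outside the trace altogether, in line with the conditional form of the parity definition.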

The check-as-you-go order of verification also involves marking nodes, but in reverse direction as one builds the tree from its root. To reduce clutter, the following ‘minimalist’ labeling can be used. Whenever one applies the counter-case rule to a node n: \(\lnot \) (\(\phi \rightarrow \psi )\), take a natural number i (say, the first one not yet used) and label the output nodes m: \(\phi \) and \(m'\): \(\lnot \) \(\psi \) by, say, \(i_{a}\), \(i_{c}\) where ‘a’ is for antecedent and ‘c’ is for consequent (don’t bother to annotate the input node n). Propagate those labels through all of the single-input decomposition rules (including the recursive rule and further applications of counter-case). For the sole double-input rule, namely, modus ponens, going from m: \(\phi \), \(m'\): \(\phi \rightarrow \psi \) to n: \(\psi \), consider the union \(L\cup L'\) of the label sets L, \(L'\) on m, \(m'\), respectively. One could propagate all labels in \(L\cup L'\) to label the output node, but it is (equivalent and) more parsimonious to proceed as follows: (i) for each i, if both marks \(i_{a}\), \(i_{c}\) are in \(L\cup L'\) then neither of them goes into the label for the output node n; (ii) otherwise whichever, if any, of \(i_{a}\), \(i_{c}\) is in \(L\cup L'\) is put into that label. To check the completed tree for parity, one inspects in turn each designated crash-pair {\(z_{1}\): \(\zeta \), \(z_{2}\): \(\lnot \) \(\zeta \)} with label sets \(L_{1}\), \(L_{2}\). If \(L_{1}\cup L_{2}\) contains \(i_{a}\) but not \(i_{c}\), or conversely, for some integer i, then parity fails; else it succeeds for that branch and we can pass to the next one. Evidently, this downwards-propagation labeling can also be effected after having completed the tree, as an alternative to the upwards-moving ‘tracing the trace’.
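The minimalist labeling can likewise be sketched in code; the pair-cancellation step at modus ponens is the only delicate point. The representation of labels as pairs `(i, 'a')` and `(i, 'c')` is an illustrative assumption.

```python
# Sketch of the 'minimalist' label bookkeeping (illustrative, not the
# chapter's notation). A label set is a set of pairs (i, 'a') / (i, 'c')
# for the i-th application of counter-case on the branch.

def counter_case_labels(i):
    # Counter-case on n: ¬(φ→ψ) yields m: φ labelled i_a and m': ¬ψ
    # labelled i_c; the input node n is left unannotated.
    return {(i, 'a')}, {(i, 'c')}

def modus_ponens_labels(labels_m, labels_m2):
    # Union the label sets of the two input nodes, then cancel any
    # index i for which both i_a and i_c are present.
    union = labels_m | labels_m2
    cancelled = {i for (i, _) in union
                 if (i, 'a') in union and (i, 'c') in union}
    return {(i, k) for (i, k) in union if i not in cancelled}

def crash_pair_parity_ok(labels_z1, labels_z2):
    # Parity fails on a branch if, for some i, the union of the
    # crash-pair's labels contains i_a but not i_c, or conversely.
    union = labels_z1 | labels_z2
    return all(((i, 'a') in union) == ((i, 'c') in union)
               for (i, _) in union)
```

Single-input rules simply pass their label set through unchanged, so they need no function of their own in this sketch.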

To summarize, we have two work schedules: construct-then-check and check-as-you-go. The latter is carried out by labeling in the direction from root to crash-pairs while the former can be done either in the same way, but after tree construction has finished, or by working backwards from crash-pairs. The author hesitates to recommend one of these options over another but feels that the check-as-you-go procedure can cramp the student’s style and more easily become centered on boring bookkeeping. It would be nice to have software in which the user can write a candidate tree, leaving it to the computer to then check it for parity.

To the author’s surprise, students did not appear to have much difficulty handling the triple quantification \(\forall B\exists Z\forall c\) in the definition of parity (and thus the quadruple quantification \(\exists T\forall B\exists Z\forall c\) in the definition of an acceptable formula). This is perhaps because once the idea is understood it becomes quite natural, especially when expressed using an implicit choice function B \(\mapsto Z_{B}\).

When looking at specific formulae, it was often convenient to make use of the fact that although modus tollens and implicative forking are not allowed as decomposition rules in the construction of directly acceptable truth-trees, they become admissible when the recursive rule is available (Observation 2.7). Modus tollens is handy when verifying the acceptability of a scheme that rather neatly reflects the roles of modus ponens and counter-case in decomposition, namely \((\lnot (\phi \rightarrow \psi )\rightarrow \theta )\leftrightarrow (\phi \rightarrow (\lnot \psi \rightarrow \theta ))\) where the biconditional abbreviates the conjunction of an arrow and its converse. Implicative forking gives a straightforward way of establishing the acceptability of formulae of the kind \(\lnot \) (\(\phi \rightarrow \psi )\), where \(\phi \), \(\lnot \) \(\psi \) are both acceptable. Contraposing \(\phi \rightarrow \psi \) provides a route for formulae of the kind (\(\phi \rightarrow \psi )\rightarrow (\phi '\rightarrow \psi )\) where \(\lnot \) \(\phi \rightarrow \lnot \phi '\) is known to be acceptable.

Finally, work on acceptability provides students with simple examples of some relatively sophisticated concepts of ‘universal logic’. It illustrates the difference between a set being computable (as is the set of directly acceptable ones) and merely semi-computable (as is, for all we know at present, the set of acceptable ones). Counter-case decomposition puts into the spotlight the difference between inferential and procedural steps in a train of reasoning. Of course, that distinction already arises in classical natural deduction, where all acts of supposition and discharge are procedural (as noted for \(\exists ^{-}\) in Sect. 2.3). To be sure, there are inferential principles underlying such procedural steps, but they are subtler than simply saying that one formula implies another; they state that if one or more inferences are valid then so is another (see, e.g., Makinson 2020, Chap. 10 for a student-oriented explanation).


© 2022 Springer Nature Switzerland AG


Makinson, D. (2022). Relevance-Sensitive Truth-Trees. In: Düntsch, I., Mares, E. (eds) Alasdair Urquhart on Nonclassical and Algebraic Logic and Complexity of Proofs. Outstanding Contributions to Logic, vol 22. Springer, Cham. https://doi.org/10.1007/978-3-030-71430-7_2
