
Explicating Logical Independence

Published in Journal of Philosophical Logic.

Abstract

Accounts of (complete) logical independence which coincide when applied in the case of classical logic diverge elsewhere, raising the question of what a satisfactory all-purpose account of logical independence might look like. ‘All-purpose’ here means: working satisfactorily as applied across different logics, taken as consequence relations. The principal candidate characterizations of independence relative to a consequence relation are (i) that the consequence relation concerned is determined by (= sound and complete w.r.t.) only classes of (bivalent) valuations providing for all possible truth-value combinations for the formulas whose independence is at issue, and (ii) that the consequence relation ‘says’ nothing special about how those formulas are related that it does not say about arbitrary formulas. (The latter approach we associate with de Jongh, though it is closely related to Marczewski’s notion of general algebraic independence, as well as to the absence of non-trivial logical relations as conceived by Lemmon.) Each of these proposals returns counterintuitive verdicts in certain cases—the truth-value inspired approach classifying certain cases one would like to describe as involving failures of independence as cases of independence, and the de Jongh approach counting some intuitively independent pairs of formulas as not being independent after all. In the final section, a modification of the latter approach is tentatively sketched to correct for these misclassifications. The attention throughout is on conceptual clarification rather than the provision of technical results. Proofs, as well as further elaborations, are lodged in the ‘longer notes’ in a final Appendix.


Notes

  1. The authors of [93] make a version of this claim with a qualification restricting it to “Popper” probability functions, those assigning the value 1 only to classical tautologies, but we can omit the qualification for a simpler formulation.

  2. See McKay [79], in which this result is not quite explicit but can be extracted from (the proof of) Theorem 1 there. I am grateful to McKay for looking into this matter in response to my queries, and doing the extraction in question for me as follows. Rephrasing the result as “Intermediate logic S has the Negative Disjunction Property iff \(\nvdash _{\mathsf {S}} \neg p \lor \neg \neg p\),” the non-trivial (‘only if’) direction is argued contrapositively thus. Suppose \(\nvdash _{\mathsf {S}} \neg p \lor \neg \neg p\). Then the Lindenbaum algebra of S contains the 5-element Jankov sequence algebra—the leftmost algebra depicted in Figure 1 of [79]—as a subalgebra. This entails that S has the negative disjunction property, since any disjunction in which each disjunct is negated has disjuncts which are S-provable iff they are CL-provable, and if neither is provable classically they are certainly refutable on the 5-element Jankov sequence algebra (by the considerations in [79]). Note that possessing the current (‘binary’) negative disjunction property, by contrast with the disjunction property itself, does not imply possession of the unrestricted negative disjunction property—i.e., the property that for all n, \(\vdash _{\mathsf {S}} \bigvee _{1 \leq i \leq n}(\neg A_{i})\) implies \(\vdash _{\mathsf {S}} \neg A_{i}\) for some i with \(1 \leq i \leq n\). See [79] for further details on this.
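
  The refutability of ¬p ∨ ¬¬p on a 5-element Heyting algebra can be checked mechanically. The sketch below (in Python, with names of our own choosing) builds the up-set algebra of a 3-point Kripke frame—a root below two incomparable points—which is a 5-element Heyting algebra refuting ¬p ∨ ¬¬p; it is offered as an illustration of such a refutation, not as a reconstruction of the algebra depicted in [79].

  ```python
  from itertools import combinations

  # A 3-point Kripke frame: root r sits below two incomparable points x, y.
  W = ('r', 'x', 'y')
  le = {('r', 'r'), ('r', 'x'), ('r', 'y'), ('x', 'x'), ('y', 'y')}

  def is_upset(s):
      # s is upward closed under the order le
      return all(w2 in s for (w1, w2) in le if w1 in s)

  subsets = [frozenset(c) for k in range(len(W) + 1)
             for c in combinations(W, k)]
  upsets = [s for s in subsets if is_upset(s)]  # the 5-element Heyting algebra

  def imp(u, v):
      # Heyting implication: union of all up-sets x with x ∩ u ⊆ v
      out = frozenset()
      for x in upsets:
          if (x & u) <= v:
              out |= x
      return out

  def neg(u):
      return imp(u, frozenset())  # ¬u = u → 0

  top = frozenset(W)
  a = frozenset({'x'})
  weak_em = neg(a) | neg(neg(a))      # value of ¬p ∨ ¬¬p when p takes value a
  print(len(upsets), weak_em == top)  # prints: 5 False
  ```

  Here ¬a = {y} and ¬¬a = {x}, so ¬p ∨ ¬¬p takes the value {x, y}, short of the top element—whereas classically ¬p ∨ ¬¬p is of course a tautology.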

  3. Here only partially described—but the behaviour of the remaining sentence letters is immaterial.

  4. By the phrase ‘perfectionistic logic’, which is a bit of his own private terminology, Burgess means a version of what Anderson and Belnap [2] and Dunn [26] call the WGS criterion—for “von Wright–Geach–Smiley”—of entailment. In fact, Chapter 5 of [13] itself has the rather strained title ‘Relevantistic Logic’, concerning which an eyebrow is raised in the review [52].

  5. See the discussion in Dunn [25] if necessary.

  6. Naturally we allow ai = aj when i ≠ j here. The algebra-specific aspects of the present notation are as in Definition 4.9 and surrounding discussion. If \(\mathfrak {M}\) is a reduced (or ‘simple’ or ‘normal’) matrix in the sense of note 70 below, one might think of this as a micro-level version of the macro-level notion of independence over the class of induced bivalent valuations, the two notions being equally legitimate and valuable. We do not take up this idea here beyond remarking that the situation would in that case be analogous—and not accidentally so—to Dummett’s suggestion that ingredient sense and assertoric content are equally useful notions of sentence meaning, rather than rivals ([24], p. 47).

  7. This is the perspective taken by logicians as otherwise diverse in their dispositions as Dummett, Suszko and Scott. References to relevant publications can be found at p. 299f. in [53].

  8. At p. 344 of [8], Blok and Pigozzi credit a 1973 paper by Wójcicki with introducing the notion of a reduced matrix—Wójcicki used the word simple for this; in fact, ten years earlier Smiley [104] had isolated the concept in question (using the terminology normal matrix). Smiley’s discussion even anticipates the later use by Blok and Pigozzi of the phrase Leibniz congruence, when on p. 428 he remarks that “normality is a kind of identity of indistinguishables”, though the nonstandard terminology—“indistinguishables” for “indiscernibles”—somewhat masks the allusion to Leibniz.

  9. These judgments are controversial in some quarters—e.g., Priest [91]; for further discussion and references, see Humberstone [57]. The strong three-valued Kleene matrix and the LP matrix are both associated with Kleene in [53], where they are called K1 and K1,2 respectively.
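
  That the two matrices share one set of 3-valued tables and differ only over which values are designated can be checked mechanically. A minimal Python sketch (function and variable names are ours; the tables are the usual min/max/1 − x presentation of the strong Kleene connectives):

  ```python
  # The strong Kleene (K3) and LP matrices share the same 3-valued tables
  # (min for ∧, max for ∨, 1 - x for ¬); only the designated values differ.
  V = (0, 0.5, 1)
  neg = lambda x: 1 - x
  lor = max
  land = min

  K3_designated = {1}        # strong Kleene: only 1 designated
  LP_designated = {0.5, 1}   # LP: the middle value is designated too

  def valid(f, designated):
      # f is a 1-place truth function built from the shared tables
      return all(f(p) in designated for p in V)

  lem = lambda p: lor(p, neg(p))  # excluded middle, p ∨ ¬p
  print(valid(lem, LP_designated), valid(lem, K3_designated))  # True False
  ```

  Excluded middle comes out LP-valid but fails in K3 at the value 1/2—one standard way the two matrices are told apart despite their shared tables.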

  10. In the longer note ‘Ultra-independence’ mention is made of a kind of failure of independence for this pair of formulas, but that notion has not been in play elsewhere in our discussion, except perhaps under the name of semantic or conceptual dependence in the discussion preceding Remarks 6.2, including note 52.

  11. As apparently Shoesmith and Smiley have: [102], p. 247—something recalled already on p. 381 of [53].

  12. This is as in Garson [35], and numerous earlier publications; Rousseau [95] indexes the valuations and has the indices partially ordered with a persistence condition vλ(p) ≤ vμ(p) whenever λμ, to mirror the Kripke semantics. The present concept of a topoboolean formula, if not the terminology, is taken from (or at least strongly inspired by) [94] and [95]. The presentation here differs slightly, though inessentially, from that in [53] p. 620ff.

  13. Strictly, the definition using (**) should take, say, q1,…,qn as representing the distinct sentence letters in C, since we do not wish to restrict attention to just such formulas as contain the first n sentence letters in the official enumeration of all sentence letters: any n such letters will do.

  14. One might here invoke the Disjunction Property to draw an even stronger conclusion (see note 24), but here we are keeping the discussion applicable to arbitrary superintuitionistic logics.

  15. A glance at the Rieger–Nishimura lattice shows that this is so for ¬¬p, and if ¬¬(pq) had any non-trivial IL-consequence in which no further variables appeared, the symmetric disposition of p and q in ¬¬(pq) suggests that substituting p for q in such a consequence would mean that the Rieger–Nishimura lattice should have an equivalent of this formula above ¬¬p. (This does not pretend to be a conclusive argument.)

  16. This explanation is missing from [95], but can be found in Rousseau [94].

  17. As usual, “proper” is just our shorthand for non-universal (Def. 2.7(ii)); these relations are not all ‘properly ternary’ in the sense of being essentially ternary. For instance, the combination of the first two Grygiel conditions represents the ternary though ‘essentially binary’ relation relating A1,A2,A3 just in case A1 and A2 are contraries. Putting this another way: the variable p3 occurs inessentially in the conjunction of the two Ci(p1,p2,p3) formulas for those two lines. The issue of essential n-arity is treated in greater detail in Humberstone [54].

  18. These are the ‘clear’ formulas of p. 1043 (including note 8), in Humberstone and Makinson [60], q.v. for further references. A question that would seem to deserve attention is whether the class of topoboolean formulas is special—de Jongh dependent—in IL (or, according to ⊢IL): is there an IL-unprovable formula C(p) with C(A) IL-provable for all topoboolean A? Or even: is there a complete linguistic characterization of being topoboolean in the sense of a formula C(p) (or set of such formulas) for which C(A) is IL-provable for all and only topoboolean A?

References

  1. Alechina, N. (2000). Functional dependencies between variables. Studia Logica, 66, 273–283.


  2. Anderson, AR, & Belnap, ND. (1975). Entailment: the logic of relevance and necessity (Vol. I). Princeton: Princeton University Press.


  3. Barnes, E. (1991). Beyond verisimilitude: a linguistically invariant basis for scientific progress. Synthese, 88, 309–339.


  4. Bell, J. L., & Demopoulos, W. (1996). Elementary propositions and independence. Notre Dame Journal of Formal Logic, 37, 112–124.


  5. Belnap, N.D. (1962). Tonk, plonk, and plink. Analysis, 22, 130–134.


  6. Berto, F. (2019). Adding 4.0241 to TLP. In G. M. Mras, P. Weingartner and B. Ritter (Eds.), Philosophy of logic and mathematics: proceedings of the 41st international Ludwig Wittgenstein Symposium. Berlin: De Gruyter (to appear).


  7. Blamey, S, & Humberstone, L. (1991). A Perspective on modal sequent logic. Publications of the Research Institute for Mathematical Sciences, Kyoto University, 27, 763–782.


  8. Blok, W. J., & Pigozzi, D. (1986). Protoalgebraic logics. Studia Logica, 45, 337–369.


  9. Brown, F. M., & Rudeanu, S. (1981). Consequences, consistency and independence in Boolean algebras. Notre Dame Journal of Formal Logic, 22, 45–62.


  10. Bunting, I. A. (1965). Some difficulties in Stenius’ account of the independence of atomic states of affairs. Australasian Journal of Philosophy, 43, 368–375.


  11. Burgess, J.P. (1981). Relevance: a fallacy? Notre Dame Journal of Formal Logic, 22, 97–104.


  12. Burgess, J.P. (1983). Common sense and “relevance”. Notre Dame Journal of Formal Logic, 24, 41–53.


  13. Burgess, J.P. (2009). Philosophical logic. Princeton: Princeton University Press.


  14. Canty, J. T., & Scharle, T. W. (1966). Note on the singularies of S5. Notre Dame Journal of Formal Logic, 7, 108.


  15. Chellas, B. (1980). Modal logic: an introduction. Cambridge: Cambridge University Press. Reprinted with corrections 1988 and subsequent years.


  16. Ciardelli, I. (2016). Questions in logic. ILLC Dissertation Series, Institute for Logic, Language and Computation, Amsterdam.

  17. Ciardelli, I. (2016). Dependency as question entailment. In S. Abramsky, & et al. (Eds.) , Dependence logic (pp. 129–181). Cham (Switzerland): Springer.

  18. Citkin, A. (2014). Characteristic formulas 50 years later (an algebraic account). arXiv:1407.5823v1 [math.LO].

  19. Correia, F. (2001). Logical Dependence and Independence in the Tractatus. In R. Haller, & K. Puhl (Eds.), Wittgenstein and the future of philosophy: a reassessment after 50 years (Proceedings of the 24th international Wittgenstein symposium), Vol. 1, pp. 1–5. Kirchberg am Wechsel: Austrian Ludwig Wittgenstein Society.

  20. Dale, A. J. (1983). The non-independence of axioms in a propositional calculus formulated in terms of axiom schemata. Logique et Analyse, 26, 91–98.


  21. Davies, M. (1981). Meaning, quantification, necessity: themes in philosophical logic. London: Routledge and Kegan Paul.


  22. de Jongh, D. H. J. (1982). Formulas of one propositional variable in intuitionistic arithmetic. In A.S. Troelstra, & D. van Dalen (Eds.) , The L.E.J.Brouwer centenary symposium. Amsterdam: North-Holland.

  23. de Jongh, D. H. J, & Chagrova, L. A. (1995). The decidability of dependency in intuitionistic propositional logic. Journal of Symbolic Logic, 60, 498–504.


  24. Dummett, M. A. (1991). The logical basis of metaphysics. Cambridge: Harvard University Press.


  25. Dunn, JM. (1976). Intuitive semantics for first-degree entailments and “coupled trees”. Philosophical Studies, 29, 149–168.


  26. Dunn, J.M. (1980). A sieve for entailments. Journal of Philosophical Logic, 9, 41–57.


  27. Ehrenfeucht, A., & Rozenberg, G. (1990). Theory of 2-structures, part i: clans, basic subclasses, and morphisms. Theoretical Computer Science, 70, 277–303.


  28. Fagin, R., Halpern, J. Y., Vardi, M. Y. (1992). What is an inference rule? Journal of Symbolic Logic, 57, 1018–1045.


  29. Fagin, R., Halpern, J. Y., Vardi, M. Y. (1995). A nonstandard approach to the logical omniscience problem. Artificial Intelligence, 79, 203–240.


  30. Fine, K. (1991). The study of ontology. Noûs, 25, 263–294.


  31. Fitelson, B., & Hájek, A. (2017). Declarations of independence. Synthese, 194, 3979–3995.


  32. Freund, M, & Lehmann, D. (1994). Nonmonotonic reasoning: from finitary relations to infinitary inference operations. Studia Logica, 53, 161–201.


  33. Fricker, E. (1994). Against gullibility. In B.K. Matilal, & A. Chakrabarti (Eds.) , Knowing from words (pp. 125–161). Dordrecht: Kluwer.

  34. Galliani, P, & Väänänen, J. (2014). On dependence logic. In A. Baltag, & S. Smets (Eds.) , Johan van Benthem on logic and information dynamics (pp. 101–19). Cham (Switzerland): Springer.

  35. Garson, J. W. (2013). What logics mean. Cambridge: Cambridge University Press.


  36. Głazek, K. (1979). Some old and new problems in (the) independence theory. Colloquium Mathematicum, 42, 127–189.


  37. Grädel, E, & Väänänen, J. (2013). Dependence and independence. Studia Logica, 101, 399–410.


  38. Grygiel, J. (1989). Absolutely independent axiomatizations for countable sets in classical logic. Studia Logia, 48, 77–84.


  39. Grygiel, J. (1990). Absolutely independent sets of generators of filters in Boolean algebras. Reports on Mathematical Logic, 24, 25–35.


  40. Harary, F. (1961). A very independent axiom system. American Mathematical Monthly, 68, 159–162.


  41. Harary, F. (1963). A measure of axiomatic independence. Mind, 72, 143–144.


  42. Humberstone, L. (1982). Necessary conclusions. Philosophical Studies, 41, 321–335.


  43. Humberstone, L. (1993). Functional dependencies, supervenience, and consequence relations. Journal of Logic, Language, and Information, 2, 309–336.


  44. Humberstone, L. (1997). Singulary extensional connectives: a closer look. Journal of Philosophical Logic, 26, 341–356.


  45. Humberstone, L. (1997). Two types of circularity. Philosophy and Phenomenological Research, 57, 249–280.


  46. Humberstone, L. (2000). Parts and partitions. Theoria, 66, 41–82.


  47. Humberstone, L. (2001). The pleasures of anticipation: enriching intuitionistic logic. Journal of Philosophical Logic, 30, 395–438.


  48. Humberstone, L. (2002). Implicational converses. Logique et Analyse, 45, 61–79.


  49. Humberstone, L. (2004). Archetypal forms of inference. Synthese, 141, 45–76.


  50. Humberstone, L. (2005). Modality. In F.C. Jackson, & M. Smith (Eds.) , The Oxford handbook of contemporary philosophy, chapter 20 (pp. 534–614). Oxford and New York: Oxford University Press.

  51. Humberstone, L. (2006). Extensions of intuitionistic logic without the deduction theorem: some simple examples. Reports on Mathematical Logic, 40, 45–82.


  52. Humberstone, L. (2010). Review of Burgess [13]. Bulletin of Symbolic Logic, 16, 411–413.


  53. Humberstone, L. (2011). The connectives. Cambridge: MIT Press.


  54. Humberstone, L. (2013). Logical relations. Philosophical Perspectives, 27, 176–230.


  55. Humberstone, L. (2015). Sentence connectives in formal logic. In E.N. Zalta (Ed.) , Stanford Encyclopedia of Philosophy (Fall 2015 Edition). <http://plato.stanford.edu/archives/fall2015/entries/connectives-logic/>.

  56. Humberstone, L. (2016). Philosophical applications of modal logic. London: College Publications.


  57. Humberstone, L. Priest on negation, to appear in Can Baskent and Thomas Ferguson (Eds.), Graham Priest on dialetheism and paraconsistency. Cham (Switzerland): Springer.

  58. Humberstone, L. (2019). Supervenience, dependence, disjunction. Logic and Logical Philosophy, 28, 3–135.


  59. Humberstone, L. ‘Semantics without toil? Brady and Rush meet Halldén’, to appear in a special issue of the journal Organon F on the legacy of C. I. Lewis.

  60. Humberstone, L, & Makinson, D. (2011). Intuitionistic logic and elementary rules. Mind, 120, 1035–1051.


  61. Jankov, V. A. (1968). The construction of a sequence of strongly independent superintuitionistic propositional calculi. Soviet Mathematics Doklady, 9, 806–807.


  62. Kaufmann, I. (1995). O- and D-predicates: a semantic approach to the unaccusative-unergative distinction. Journal of Semantics, 12, 377–427.


  63. Kern-Isberner, G., & Huvermann, D. (2017). What kind of independence do we need for multiple iterated belief change? Journal of Applied Logic, 22, 91–119.


  64. Khamara, E.J. Modality in Aristotle’s De Interpretatione, archived (since 2007) at http://philpapers.org/rec/KHAD.

  65. Kjellberg, G. (1959). Logical and other kinds of independence. In Proceedings of an international symposium on the theory of switching, 2–5 April 1957, Part 1. (Annals of the Computation Laboratory of Harvard University Volume 29) (pp. 117–124). Cambridge: Harvard University Press.

  66. Koslicki, K. (2013). Ontological dependence: an opinionated survey. In M. Hoeltje, B. Schnieder, A. Steinberg (Eds.) , Varieties of dependence: ontological dependence, grounding, supervenience, response-dependence (pp. 31–64). Munich: Philosophia Verlag.

  67. Kowalski, T, & Humberstone, L. (2016). An Abelian rule for BCI—and variations. Notre Dame Journal of Formal Logic, 57, 551–568.


  68. Kripke, S. A. (1963). “Flexible” predicates of formal number theory. Proceedings of American Mathematical Society, 13, 647–650.


  69. Kuhn, S.T., & Weatherson, B. (2018). Notes on some ideas in Lloyd Humberstone’s philosophical applications of modal logic. Australasian Journal of Logic, 15, 1–18.


  70. Lang, J., Liberatore, P., Marquis, P. (2003). Propositional independence: formula-variable independence and forgetting. Journal of Artificial Intelligence Research, 18, 391–443.


  71. Lemmon, E. J. (1965). Beginning logic. London: Nelson.


  72. Lewis, D. (1988). Relevant implication. Theoria, 54, 161–174.


  73. Makinson, D. (1973). A warning about the choice of primitive operators in modal logic. Journal of Philosophical Logic, 2, 193–196.


  74. Makinson, D. (1989). General theory of cumulative inference. In M. Reinfrank (Ed.) , Nonmonotonic reasoning, lecture notes in AI #346 (pp. 1–18). Berlin: Springer.


  75. Marcos, J. (2007). Ineffable inconsistencies. In J.-Y. Béziau, W. Carnielli, D. Gabbay (Eds.) , Handbook of paraconsistency (pp. 341–352). London: College Publications.

  76. Marczewski, E. (1958). A general scheme of the notions of independence in mathematics. Bulletin de l’Académie Polonaise des Sciences (Série des Sciences Mathématiques, Astronomiques, et Physiques), 6, 731–736.


  77. Marczewski, E. (1960). Independence in algebras of sets and Boolean algebras. Fundamenta Mathematicae, 48, 135–145.


  78. Massey, G. J. (1968). Normal form generation of S5 functions via truth functions. Notre Dame Journal of Formal Logic, 9, 81–85.


  79. McKay, C. G. (2018). On the negative disjunction property. Australasian Journal of Logic, 15, 19–24.


  80. McKee, T. A. (1985). Generalized equivalence: a pattern of mathematical expression. Studia Logica, 44, 285–289.


  81. McKinsey, J. C. C. (1943). Review of Shianghaw Wang, ‘A system of completely independent axioms for the sequence of natural numbers’. Journal of Symbolic Logic, 8, 84.


  82. Meyer, R. K., & Routley, R. (1974). Classical relevant logics, II. Studia Logica, 33, 183–194.


  83. Michael, M. (1987). Formal semantics and the meaning of logical connectives: the case of relevant logic. Honours thesis, Monash University Department of Philosophy.

  84. Miller, D. (1974). Popper’s qualitative theory of verisimilitude. British Journal for the Philosophy of Science, 25, 166–177.


  85. Miller, D. (1977). The uniqueness of atomic facts in Wittgenstein’s Tractatus. Theoria, 43, 174–185.


  86. Movsisyan, Y. M., & Aslanyan, V. A. (2014). De Morgan functions and free De Morgan algebras. Demonstratio Mathematica, 47, 271–283.


  87. Oller, C. (2014). Is classical negation a contradictory-forming operator? Notae Philosophicae Scientiae Formalis, 3, 1–7.


  88. Połacik, T, & Humberstone, L. (2018). Classically archetypal rules. Review of Symbolic Logic, 11, 279–294.


  89. Porte, J. (1960). Un Système pour le Calcul des Propositions Classiques où la Règle de Détachement n’est pas Valable. Comptes Rendues Hebdomadaires des Séances de l’Académie des Sciences de Paris, 251, 188–189.


  90. Potts, D. H. (1974). Review of Harary [40] and [41]. Journal of Symbolic Logic, 39, 604.


  91. Priest, G. (2006). Doubt truth to be a liar. Oxford: Oxford University Press.


  92. Rivieccio, U. (2012). An infinity of super-Belnap logics. Journal of Applied Non-Classical Logics, 22, 319–335.


  93. Roeper, P, & Leblanc, H. (1995). Of A and B being logically independent of each other and of their having no common factual content. Theoria, 61, 61–79.


  94. Rousseau, GF. (1968). Sheffer functions in intuitionistic logic. Zeitschrift für Mathematische Logik und Grundlagen der Mathematik, 14, 279–282.


  95. Rousseau, GF. (1970). The separation theorem for fragments of the intuitionistic propositional calculus. Zeitschrift für Mathematische Logik und Grundlagen der Mathematik, 16, 469–474.


  96. Routley, R, & Routley, V. (1972). Semantics of first-degree entailment. Noûs, 3, 335–359.


  97. Sally, P.J. (2008). Tools of the trade: introduction to advanced mathematics. Providence: American Mathematical Society.


  98. Sandu, G. (2012). Independence-friendly logic: dependence and independence of quantifiers in logic. Philosophy Compass, 7, 691–711.


  99. Sanford, D.H. (1981). Independent predicates. American Philosophical Quarterly, 18, 171–174.


  100. Segerberg, K. (1986). Modal logics with functional alternative relations. Notre Dame Journal of Formal Logic, 27, 504–522.


  101. Shoesmith, D. J., & Smiley, T.J. (1971). Deducibility and many-valuedness. Journal of Symbolic Logic, 36, 610–622.


  102. Shoesmith, D. J., & Smiley, T. J. (1978). Multiple-conclusion logic. Cambridge: Cambridge University Press.


  103. Simons, P.M. (1981). Logical and ontological independence in the Tractatus. In E. Morscher, & R. Stranzinger (Eds.), Ethics: foundations, problems and applications (pp. 464–467). Vienna: Holder–Pichler–Tempsky.

  104. Smiley, T. J. (1962). The independence of connectives. Journal of Symbolic Logic, 27, 426–436.


  105. Stefanutti, L. (2008). A characterization of the concept of independence in knowledge structures. Journal of Mathematical Psychology, 52, 207–217.


  106. Steinberger, F. (2011). Why conclusions should remain single. Journal of Philosophical Logic, 40, 333–355.


  107. Tennant, N. (1987). Anti-realism and logic. Oxford: Oxford University Press.


  108. Thomason, R. H. (1970). Indeterminist time and truth-value gaps. Theoria, 36, 264–281.


  109. Thomason, S. K. (1980). Independent propositional modal logics. Studia Logica, 39, 143–144.


  110. Troelstra, A. S. (1965). On intermediate propositional logics. Indagationes Mathematicae, 27, 141–152. [ = Koninklijke Nederlandse Akademie van Wetenschappen, Procs., Series A, vol. 68].


  111. Umezawa, T. (1959). On intermediate propositional logics. Journal of Symbolic Logic, 24, 20–36.


  112. Väänänen, J. (2007). Dependence logic: a new approach to independence friendly logic. London Mathematical Society student texts #70. Cambridge: Cambridge University Press.


  113. van Benthem, J. (1997). Modal foundations for predicate logic. Logic Journal of the IGPL, 5, 259–286.


  114. van Rooij, R. (2007). Strengthening conditional presuppositions. Journal of Semantics, 24, 289–304.


  115. Wansing, H. (2006). Contradiction and contrariety: priest on negation. In J. Malinowski, & A. Pietruszczak (Eds.) , Essays in logic and ontology (pp. 81–93). Amsterdam: Rodopi.

  116. Williamson, T. (1992). An alternative rule of disjunction in modal logic. Notre Dame Journal of Formal Logic, 33, 89–100.


  117. Williamson, T. (1995). Is knowing a state of mind? Mind, 104, 533–565.


  118. Williamson, T. (2000). Knowledge and its limits. Oxford: Oxford University Press.


  119. Wojtylak, P. (1989). Independent axiomatizability of sets of sentences. Annals of Pure and Applied Logic, 44, 259–299.


  120. Wolniewicz, B. (1970). Four notions of independence. Theoria, 36, 161–164.


  121. Yablo, S. (2014). Aboutness. Princeton: Princeton University Press.


  122. Zolin, E. E. (2000). Embeddings of propositional monomodal logics. Logic Journal of the IGPL, 8, 861–882.



Acknowledgments

I am grateful to Rohan French for several suggested improvements to this paper, to Craig McKay for sharing his observations on the Negative Disjunction Property with me ([79] and the unnumbered Proposition appearing in the Intermediate-logical Postscript to Section 3—longer notes for that section), and to a referee for this journal for numerous corrections, suggestions and questions.

Corresponding author

Correspondence to Lloyd Humberstone.


Appendix: Longer Notes


1.1 On Section 1

Elaboration of note 3

That note mentions dependence and independence logic as in Grädel and Väänänen [37]. Talk of dependence here is cognate with talk of the dependence of a function on one or more of its arguments, and of one variable on another (see Alechina [1]), as well as with W. W. Armstrong’s notion of functional dependencies in the theory of relational databases, and this last can be related to simple logical independence via the analogy between functional dependencies (in this last sense) and consequence relations noted in the Appendix to Fine [30] and pursued in Humberstone [43]; more recent discussions of Armstrong’s conditions on functional dependencies appear in Grädel and Väänänen [37] and Galliani and Väänänen [34], as adjuncts to the recent and now merging fields of dependence logic and inquisitive logic—see Väänänen [112] and Ciardelli [16] or [17]. When the present paper was aired in an early form at the Australasian Association for Logic annual conference in Melbourne (June 30–July 2, 2016), it bore the title ‘Independence in Logic’, but it has since become evident that such a title would risk arousing expectations of Väänänen’s dependence and independence logics, so the title has been changed to forestall any disappointment on this score. Some connections between dependence in the ‘(modal) dependence logic’ sense and the philosophically fruitful—though somewhat controversial—notion of supervenience can be found in Sections 1–3 of Humberstone [58]. We revisit one aspect of this theme in the course of the longer note below, ‘More on Grygiel, Galliani–Väänänen, and Kjellberg’.
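
The Armstrong-style notion of functional dependency just mentioned admits a compact illustration: the closure of an attribute set under a set of dependencies can be computed by a simple fixpoint loop, against which Armstrong’s axioms (reflexivity, augmentation, transitivity) are sound and complete. A Python sketch, with naming of our own and hypothetical dependencies in the example:

```python
# Attribute-set closure under a set of functional dependencies.
def closure(attrs, fds):
    """attrs: iterable of attributes; fds: (lhs, rhs) pairs of frozensets."""
    closed = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # fire any dependency whose left side is already in the closure
            if lhs <= closed and not rhs <= closed:
                closed |= rhs
                changed = True
    return frozenset(closed)

# Hypothetical dependencies A -> B and B -> C, whence A -> C by transitivity.
fds = [(frozenset('A'), frozenset('B')), (frozenset('B'), frozenset('C'))]
print(sorted(closure({'A'}, fds)))  # ['A', 'B', 'C']
```

The analogy with consequence relations noted above is visible here: an attribute set ‘entails’ exactly the attributes in its closure, much as a set of formulas entails its consequences.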

On Grygiel and Others

The notion of complete independence from Section 1 is called absolute independence in Grygiel [39], which, instead of citing E. H. Moore in connection with it, cites three references by way of historical background, the earliest of which is a 1963 paper by Kripke (namely [68]), in which a theory-relative version of the notion appears. The phrase absolutely independent was also used, in another way, in Harary [40], defined by him to apply to a set of axioms when for each of its subsets there is a model in which the remaining axioms hold but each of those in the subset in question “never holds”. Unfortunately the notion of never holding, as opposed to merely not holding, was not well-defined—at least if one wants the definition to rule in the same way for equivalent formulas—undermining the whole exercise: see Potts [90]. Comprehensive information on the subject of simple independence can be found in Wojtylak [119], which includes references to the work of Tarski and of Reznikoff. For (simple) independence of axioms vs. independence of axiom-schemata, see Dale [20]; Dale shows, concerning a popular axiomatization of classical propositional logic using axiom-schemata and Modus Ponens as the sole rule, that none of the (infinitely many) axioms instantiating these schemata is independent, although each schema is independent in the sense that if it were dropped not all of its instances would then be provable. There is also of course a notion of simple independence for rules in an axiom system, or rather two such notions, distinguished as T-independence and D-independence in Porte [89]: non-admissibility on the basis of the remaining rules (including axioms as 0-premiss rules) and non-derivability on the basis of the remaining rules.
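
For the classical case, the truth-value-combination notion of complete independence can be tested by brute force: a set of n formulas is completely independent just in case all 2^n combinations of truth values for them are realized by some valuation. A Python sketch (names ours):

```python
from itertools import product

# Complete independence in the truth-value sense: every one of the 2^n
# combinations of truth values for the n formulas is realized by some
# bivalent valuation of the sentence letters.
def completely_independent(formulas, letters):
    realized = set()
    for vals in product((False, True), repeat=len(letters)):
        v = dict(zip(letters, vals))
        realized.add(tuple(f(v) for f in formulas))
    return len(realized) == 2 ** len(formulas)

p = lambda v: v['p']
q = lambda v: v['q']
p_or_q = lambda v: v['p'] or v['q']

print(completely_independent([p, q], ['p', 'q']),
      completely_independent([p, p_or_q], ['p', 'q']))  # True False
```

Here {p, q} passes, while {p, p ∨ q} fails, since no valuation makes p true and p ∨ q false.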

Specificational Independence: a Passage from Williamson

We continue note 6, which ended with the remark that the FV-independence of [70] amounted to a sentence letter’s not occurring essentially in a formula: i.e., in its not occurring in every formula equivalent to the given formula. Of course this (a) includes the case of not occurring in the formula at all and (b) is a logic-relative matter, since logics will rule differently on what is equivalent to what. (For example, q occurs essentially in q ∨¬q relative to intuitionistic logic but not relative to classical logic.) Fixing on some logic, the content of, or proposition expressed by, a formula is often thought of as the class of formulas equivalent to that formula (or, when this is different, the class of formulas synonymous with that formula, in a sense recalled in Section 7), and so not being part of the content of a formula would be a matter of being absent from at least one—rather than from all—of the formulas in this class. In an informal sense, one can specify the content independently of the given sentence letter because one can cite one of the equivalent formulas in which it does not occur. Here is a quotation from Williamson [117], p. 542 (or [118], p. 32), speaking in a similar vein though not with any particular formal logic in mind:

If G is necessary for F, there need be no further condition H, specifiable independently of F, such that the conjunction of G and H is necessary and sufficient for F. Being coloured, for example, is necessary for being red, but if one seeks a further condition whose conjunction with being coloured is necessary and sufficient for being red, one finds only conditions specified in terms of “red”: being red; being red if coloured.

We can add a purely formal example of the same kind of thing. Let us think of a loose analogue of being red as given by the sentence letter p, and of the analogue of being coloured as given by the disjunction p ∨ q. Now the claim in question turns out to be that there is no formula B in which p does not occur—corresponding to a condition specified without use of “red”—such that

$$ (p \lor q ) \land B \dashv\vdash p, $$

where ⊢ is any reasonable consequence relation—for example any ⊢⊆⊢CL. For definiteness we can take ⊢ as ⊢CL itself, since if there is (as we shall see there is) no such B available in that case, there won’t be relative to any weaker consequence relation. From the “⊣” direction of the envisaged equivalence, we have that p ⊢ B. So, by the uniformity/cancellation property (mentioned in the longer note ‘Ultra-independence’ below) of ⊢CL, since ex hypothesi the formula on the left shares no sentence letters with that on the right, either everything is a ⊢-consequence of that on the left—which is evidently not the case—or else the formula B is a ⊢-consequence of every formula, in which case it is a classical tautology and the left-hand side of the inset equivalence is in turn equivalent to p ∨ q; and so the “⊢” direction of that equivalence tells us that p ∨ q ⊢ p, which is again evidently not the case. So there is no such B. (Note that we are not claiming that this argument goes through for every ⊢⊆⊢CL, which would be false since some such sublogics of classical logic—for example Minimal Logic—lack the uniformity property appealed to. The claim is, rather, that since the conclusion of the argument is that there is no B satisfying the equivalence inset above for ⊢=⊢CL, this conclusion holds for any ⊢⊆⊢CL.)
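The conclusion can be checked by brute force in the special (illustrative, not fully general) case in which B is built from the letter q alone, so that B induces one of the four unary truth functions of q:

```python
from itertools import product

# Each p-free formula in the letter q alone induces one of the four
# unary truth functions of q; candidates for B are represented by these.
unary_fns = [
    lambda q: True,       # a tautologous B
    lambda q: q,          # B equivalent to q
    lambda q: not q,      # B equivalent to not-q
    lambda q: False,      # a contradictory B
]

def equivalent_to_p(B):
    """Does (p or q) and B have the same classical truth table as p?"""
    return all(((p or q) and B(q)) == p
               for p, q in product([True, False], repeat=2))

# No candidate works, as the cancellation argument predicts.
assert not any(equivalent_to_p(B) for B in unary_fns)
```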

More on Grygiel, Galliani–Väänänen, and Kjellberg

The first of the intermediate notions mentioned in the paragraph (nearly) ending with note 6 seems to be being confused with complete—or as she calls it (cf. the longer note above) absolute—independence on the opening page of Grygiel [38], where the ‘absolute independence’ of the continuum hypothesis in standard set theory is glossed as consisting in the fact that neither it nor its negation is derivable from the remaining axioms. In the case in which only two formulas, A and B, are involved, simple independence means that neither is a consequence of the other, and the present intermediate notion adds the requirement that the negation of B is not a consequence of A (or equivalently, the negation of A is not a consequence of B), while complete independence adds the further requirement that B is not a consequence of the negation of A (or equivalently, that A is not a consequence of the negation of B). These formulations are suitable for the case in which negation is treated as by classical logic—the background assumed for most of Grygiel’s discussion—and the final added requirement would evidently not go through as given here if we had, for instance, intuitionistic logic in mind, in which for example the parenthetical “equivalently” claims would fail. (Grygiel touches on the intuitionistic case in her discussion, as we shall see in Section 3.)

A similar inaccuracy arises in the discussion by Galliani and Väänänen ([34], p. 111) of the relation between the notion of independence they are primarily interested in and write as “\(\vec {x} \bot \vec {y}\) ” on the one hand, and independence of axioms on the other. Here \(\vec {x}\) and \(\vec {y}\) are sequences of individual variables, not necessarily of the same length, and for a function s assigning elements of the domain to these variables \(s(\vec {x})\), \(s(\vec {y})\) are the corresponding sequences of elements, and a set S of such assignments is defined ([34], p. 109) to satisfy the formula \(\vec {x} \bot \vec {y}\) just in case

$$\forall s, s^{\prime} \in S\exists s^{\prime\prime}\in S\left( s^{\prime\prime}(\vec{y}) = s(\vec{y}) \land s^{\prime\prime}(\vec{x}) = s^{\prime}(\vec{x})\right).$$

Of particular significance for a comparison, in the quotation to follow, with independence of axioms is the case in which—if we are thinking of “⊥” in “\(\vec {x} \bot \vec {y}\)” as meaning that what is on its left is independent of what is on its right—on the left there appears just a single variable.

There is an earlier common use of the concept of independence in logic, namely the independence of a set Σ of axioms from each other. This is usually taken to mean that no axiom is provable from the remaining ones. By Gödel’s Completeness Theorem this means the same as having for each axiom ϕ ∈Σ a model of the remaining ones Σ ∖{ϕ} in which ϕ is false. This is not so far from the independence concept \(\vec {y}\bot \vec {x}\). Again, the idea is that from the truth of Σ ∖{ϕ} we can say nothing about the truth-value of ϕ. This is the sense in which the Continuum Hypothesis (CH) is independent of ZFC. Knowing the ZFC axioms gives us no clue as to the truth or falsity of CH. In a sense our independence atom \(\vec {y}\bot \vec {x}\) is the familiar concept of independence transferred from the world of formulas to the world of elements of models, from truth-values to variable values.

The passage just quoted is rather confusing: the part about having “for each axiom ϕ ∈Σ a model of the remaining ones Σ ∖{ϕ} in which ϕ is false” would mean, in the special case in which ϕ is CH and Σ is Σ0 ∪{CH} for Σ0 a set of axioms for ZFC, that from the truth of the ZFC axioms we cannot infer that CH is true; it does not mean that “knowing the ZFC axioms gives us no clue as to the truth or falsity of CH”, since it is compatible with the falsity of CH following from the axioms. So while the initial gloss in terms of models is simple independence, what this last gloss suggests is at issue is the intermediate notion of independence alluded to above. Citing this particular example from axiomatic set theory is confusing because most readers will know that the consistency of CH with ZFC had already been shown (by Gödel) before the simple independence of CH from ZFC was shown (by Cohen), so it is natural to regard Cohen as having thereby shown independence in this intermediate sense—neither CH nor its negation derivable from the other axioms—as a corollary of simple independence via the already known consistency result. But now, how do we relate this to the “⊥” idea, from which Galliani and Väänänen say independence of axioms is “not so far”? (We return to a variation on the above “∀s,s′ ∈ S” condition in Definition 2.3(i) and Proposition 2.4, relating it to complete independence rather than to any kind of independence that might typically be worried about in connection with candidate axioms. “Typically” is inserted here because Grygiel [38] addresses itself to the complete independence of sets of axioms—though the word “axioms” here could just as well be “sentences” or “formulas”, since it is hard to see why, once one has an irredundant axiomatic basis on one’s hands, one should take any further interest in its elements being completely independent. The same applies in connection with the discussion on p. 43 of Sally [97], with its section headed ‘The Complete Independence of Axiom Systems’; indeed the present sceptical note about the interest of the complete independence of a set of formulas qua axioms was sounded already by McKinsey in the 1940s, in the final sentence of his [81].)

A similar misstatement occurs in the third (and final) sentence of the following passage from Kjellberg [65], p. 117:

When we say that n propositional variables are independent, we mean that no combination of values of n − 1 of them determines the value of the remaining one. An equivalent statement is that all 2n combinations of values are possible. In particular, this is the usual meaning of independence of axioms, where the ‘values’ of an axiom are the two possibilities that it is satisfied or not satisfied.

No, this is not the usual meaning of talk of independence of axioms, in connection with which, for each candidate axiom, one wants a model satisfying the remaining axioms but not it—and possibly also, for a (dare I say ‘colloquial’?) usage which builds in consistency, a model satisfying the remaining axioms as well as the given axiom. But no-one interprets an axiomatization with three axioms, the second and third being of the forms ∃xy(Rxy) and ∃xy(¬Rxy), R some dyadic predicate symbol, as having axioms which are not independent on the grounds that, whatever the first axiom is, there is no model in which the second and third are both false. Simple independence is not complete independence! (Consider also the case of classical propositional calculus formulated with axiom-schemata and Modus Ponens as the sole rule, as in the brief discussion of A. J. Dale in the longer note ‘On Grygiel and others’ above, and suppose we have managed to find a basis not exhibiting the ‘Dale phenomenon’, i.e. a basis in which none of the instances of these schemata is derivable from all the rest. The confusion just exhibited would provoke the protest that to derive anything new from the axioms we need to apply Modus Ponens and thus need two axioms respectively of the forms A and A → B, which are then not ‘completely independent’ since no Boolean valuation falsifies both. In terminology introduced in later sections, this would be put in terms of standing in the logical relation of subcontrariety.)
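A propositional analogue of the contrast can be checked mechanically (a sketch: p ∨ q and ¬p ∨ ¬q are each simply independent of the other—neither classically implies the other—yet, being subcontraries over the Boolean valuations, they are not completely independent):

```python
from itertools import product

# All four Boolean valuations of the letters p, q.
val_pairs = list(product([True, False], repeat=2))

A = lambda p, q: p or q              # p v q
B = lambda p, q: (not p) or (not q)  # ~p v ~q

# Simple independence: neither formula is a classical consequence of the other.
assert not all(B(p, q) for p, q in val_pairs if A(p, q))   # A does not imply B
assert not all(A(p, q) for p, q in val_pairs if B(p, q))   # B does not imply A

# But complete independence fails: the combination (F, F) is never realized,
# i.e. A and B are subcontraries over the Boolean valuations.
combos = {(A(p, q), B(p, q)) for p, q in val_pairs}
assert combos == {(True, True), (True, False), (False, True)}
```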

While we have this passage from Kjellberg before us, it is worth attending to the first two sentences also. Although this begins with a reference to n propositional variables, since what is being explained is what the relevant meaning of “independent” is, this seems better taken as a definition of what it is for n (arbitrary) formulas to be independent, coupled with the remark that any n propositional variables are independent in the sense explained. The second formulation offered by Kjellberg amounts to that provided by Definition 2.1 of what it is for the set of formulas Γ to be independent over a set of valuations, for Kjellberg’s case taking Γ as finite (and the set of valuations to comprise the Boolean valuations); in fact the finiteness or otherwise of the set of formulas is not relevant to the equivalence of the two formulations. What is especially significant is Kjellberg’s initial formulation, because it leads us to recognise that there is after all a connection between simple independence and complete independence—though not quite the one that the preceding paragraph charges Kjellberg with confusingly forging. To explain this connection we help ourselves to material explained in Section 2, and in particular not only to Definition 2.1 but also to Definition 2.10(iii). The latter defines the consequence relation ⊢V, as we may call it here, determined by a class V of valuations, by putting Γ ⊢V B iff every v ∈ V verifying each A ∈Γ verifies B. For the sake of the present remarks (as in Section 1 of Humberstone [58] and works there cited), we might more clearly call this the consequence relation inference-determined by V, and for emphasis write \(\vdash _{V}^{\mathsf {inf}}\). This we distinguish from the consequence relation \(\vdash _{V}^{\mathsf {svc}}\)—supervenience-determined by V—which relates Γ to B when every pair of valuations from V which agree on each A ∈Γ agree on B. Here, saying that u,v agree on A just means that u(A) = v(A).
To avoid confusion with these uses of the word determine, let us reformulate Kjellberg’s point, substituting “fixes” for “determines”, as saying that complete independence over V coincides with its being the case that no combination of values assigned by the valuations in V to n − 1 of the n formulas concerned fixes the value of the remaining formula. We can take the claim that, for instance no such combination of values assigned by valuations in V to the formulas A,B,C fixes the value of D, to be the claim that \(A, B, C \nvdash _{V}^{\mathsf {svc}} D\), since this says that V harbours valuations agreeing on A,B,C but differing on D. So the complete independence over V of A,B,C,D amounts to:

$$ A,B, C \nvdash_{V}^\mathsf{svc} D~\text{and}~A, B, D \nvdash_{V}^\mathsf{svc} C~\text{and}~A, C, D \nvdash_{V}^\mathsf{svc} B~\text{and}~B, C, D \nvdash_{V}^\mathsf{svc} A,$$

illustrating the connection with simple independence. Complete independence over V coincides with simple independence—no formula being a consequence of the rest—not w.r.t. the consequence relation \(\vdash _{V}^{\mathsf {inf}}\) (the complaint of the previous paragraph), but w.r.t. the consequence relation \(\vdash _{V}^{\mathsf {svc}}\). Thus the comment above to the effect that simple independence is not complete independence is to be interpreted with both references to independence relativized to the same thing—in the formulation just given, the same consequence relation. (We could make a similar contrast using classes of valuations: complete independence over V is simple independence over the class of equivalential combinations of valuations in V, where the equivalential combination u ≡ v of valuations u,v is defined by: (u ≡ v)(A) = T iff u(A) = v(A). For more, see Humberstone [53], p. 1131f., and discussion after Proposition 1.2 in [58].)
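The coincidence just described can be checked mechanically in a small case (a sketch over the class of all four Boolean valuations of p and q; the helper names svc_consequence and equiv_comb are illustrative inventions, standing in for the supervenience-determined consequence relation and the equivalential combination):

```python
from itertools import product

# V: the four Boolean valuations of p and q, each a dict from letter to value.
V = [{'p': p, 'q': q} for p, q in product([True, False], repeat=2)]

def svc_consequence(V, premisses, conclusion):
    """Gamma |-svc B over V: every pair of valuations in V agreeing on all
    formulas in Gamma also agrees on B (supervenience-determination)."""
    return all(u[conclusion] == v[conclusion]
               for u in V for v in V
               if all(u[C] == v[C] for C in premisses))

# Complete independence of p, q over V: all four value combinations occur...
assert {(v['p'], v['q']) for v in V} == set(product([True, False], repeat=2))
# ...which matches simple independence w.r.t. |-svc: neither letter's value
# fixes the other's.
assert not svc_consequence(V, ['p'], 'q')
assert not svc_consequence(V, ['q'], 'p')

# Equivalential combination: (u == v)(A) = T iff u(A) = v(A).
def equiv_comb(u, v):
    return {C: u[C] == v[C] for C in u}
```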

More on Notions of Independence

In the case of probabilistic independence, the interpretation-relativity mentioned in Section 1 might be better put in terms of relativity to a particular probability function; we can abstract from this by considering independence relative to all such functions, a relation which Roeper and Leblanc in [93] show coincides with complete logical independence,Footnote 1 which is the relation of interest for the present discussion. Numerous further notions of independence have also been the focus of logical attention, such as definitional independence and functional (in)dependence in the sense of Smiley [104] (see also p. 628 of [53] for more general orientation concerning this concept, which, incidentally, should not be confused with the functional dependencies mentioned in the elaboration of note 3 at the start of the Appendix), and David Lewis’s ideas of orthogonality and non-overlap of subject matters (see [72])—the first of which is closely related to the ultra-independence notion discussed below. (See also van Rooij [114]; a broadly similar notion of orthogonality for statements—or more precisely formulas—features in Proposition 2.4.) Sanford [99] discusses a notion of independence for predicates which differs from the obvious adaptation one could give of a notion of (complete) logical independence for statements, though since he is not discussing non-classical logics, Sanford takes the latter as unproblematic. For the kind of independence he has in mind, the predicates “has a length of between 5 and 10 centimetres” and “has a length of between 10 and 20 centimetres” are not independent, their logical independence notwithstanding—intuitively because of their same-dimensionality, though [99] should be consulted for his favoured account of this feature.

Philosophical interest in the notion of complete independence was stimulated by its role in Wittgenstein’s Tractatus, and in the commentaries on it by Eric Stenius and Max Black. There is an extensive journal literature on the topic responding directly or indirectly to this stimulus, to which the paper already mentioned—Miller [85]—belongs, as do the following (listed in chronological order): Bunting [10], Wolniewicz [120], Simons [103], Bell and Demopoulos [4], and Correia [19]. While these discussions of complete independence are at least indirectly motivated by concern with philosophical questions (about logical atomism), there has also been considerable interest in the unification of various cases of talk of independence in diverse areas of mathematics, the best known such unifying enterprise being the work of Marczewski and collaborators, beginning in 1958 with [76]. A survey of these developments is provided by Głazek [36], which has an extensive bibliography. We return to this topic in Section 4.

Ultra-independence

The term ‘ultra-independence’ is introduced in (note 25 and) Section 4 of Humberstone [46], for the relation between two variable-disjoint formulas of a propositional language—formulas sharing no sentence letters in their construction, that is—when neither of them is either provable or refutable in some logic understood in the context, which for [46] was classical propositional logic. This relation, for that choice of logic at least, is indeed a strengthening of independence, by Proposition 2.4 and the well-known fact that any two variable-disjoint formulas (of the language of classical propositional logic) are orthogonal over the class of Boolean valuations. As mentioned in Section 1, though, this terminology is potentially problematic, in that, for a different logical setting, two formulas’ being ultra-independent does not guarantee, as the terminology suggests it should, that those formulas are independent, on any (plausible) understanding of what (complete) independence consists in. Any case of Halldén incompleteness, for example, presents a counterexample. Take the case of the smallest normal modal logic K, for instance, in which the variable-disjoint disjunction □(p ∧¬p) ∨ ◊(q → q) is provable despite the unprovability (and the unrefutability, the disjuncts being equivalent to each other’s negations) of either disjunct—where the parenthetical observation here attests to their non-independence. (Another issue here, even in the absence of this kind of counterexample: anything called ultra-independence should be de-syntacticized at least to the extent that formulas A and B should stand in this relation when they are respectively equivalent to variable-disjoint formulas A′, B′, even if A and B are not themselves variable-disjoint.)

The orthogonality of variable-disjoint formulas extends to any logic with a characteristic matrix—any many-valued logic in the sense of the phrase in which ‘many’ does not imply ‘more than two’—and is intimately connected not only with such connective-specific properties as Halldén completeness (whose interest depends on the presence of disjunction, or some suitable substitute, and its behaving as expected) but with the more abstract property variously known as ‘uniformity’ or ‘cancellation’ (Shoesmith and Smiley [102], pp. 270, 272 and 278, and Section 4 of Humberstone [59]). In [46], it is deployed to throw some light on the behaviour of logical subtraction. See also Exercise 5.23.7 and Remark 5.23.8 on p. 688 of [53], where this terminology is not used, but mention is made of the fact that although p and p ⇔ q can take any pair of truth-values classically, the way p ⇔ q can be true or false is constrained by the way p is true or false, and it is also remarked that there is a problem of unwanted language-dependence here—sensitivity to the choice of sentence letters: trading in p ⇔ q for a new sentence letter is the core of the language dependence objection (to Pavel Tichý’s account of verisimilitude) in Section 6 of Miller [84]; see also Miller [85], leading to the refinements and elaborations of [46]. But talk of the way compounds get the truth-values they have is alive and well in Yablo [121], e.g., p. 40 (and p. 45, for “how B is true”); further discussion and references can be found in §4 of Berto [6]. Indeed there may be a reply to the language-dependence objection, which presumes that languages are intertranslatable if for each sentence of either language there is a sentence of the other language which is true under the same conditions; one option would be to demand more: that for each sentence of either language there is a sentence of the other language which is true in the same way under the same conditions.

1.2 On Section 2

Proof of Proposition 2.8

Beginning with a statement of the result:

Proposition 2.8

Let S be a set of formulas closed under Uniform Substitution. For each n-ary connective # (of the language concerned), we define, for all formulas A1,…,An, \(\mathcal {R}^{\#}(A_{1},\ldots ,A_{n})\) iff #(A1,…,An) ∈ S, and \(\overline {\mathcal {R}}^{\#}(A_{1},\ldots ,A_{n})\) otherwise. Then for n-ary connectives ∘ and ⋆, if \(\overline {\mathcal {R}}^{\circ } = \mathcal {R}^{\star }\) then \(\mathcal {R}^{\circ }\) is either the universal or the empty n-ary relation on formulas of the language, as (therefore) is its complement \(\mathcal {R}^{\star }\).

Proof

Assume that S, \(\mathcal {R}^{\circ }\), and \(\mathcal {R}^{\star }\) are as described, with a view to showing that \(\mathcal {R}^{\circ }\) is either universal or empty, which will be done by showing that for any formulas A1,…,An, B1,…,Bn, if \(\mathcal {R}^{\circ }(A_{1},\ldots ,A_{n})\) then \(\mathcal {R}^{\circ }(B_{1},\ldots ,B_{n})\). Unpacking the \(\mathcal {R}\) notation, and writing “⊢SC” for “CS”, this means we have to show that

$$\vdash_\mathsf{S}\!{\circ}(A_{1},\ldots,A_{n}) \Longrightarrow \vdash_\mathsf{S}\! {\circ}(B_{1},\ldots,B_{n}).$$

We make use of the assumption that \(\overline {\mathcal {R}}^{\circ } = \mathcal {R}^{\star }\), or equivalently, \(\mathcal {R}^{\circ } = \overline {\mathcal {R}}^{\star }\), i.e., for all formulas C1,…,Cn

$$ \vdash_\mathsf{S}\!{\circ}(C_{1},\ldots,C_{n}) \Longleftrightarrow \nvdash_\mathsf{S}\! {\star}(C_{1},\ldots,C_{n}) $$
(‡)

Suppose, then, that ⊢S ∘(A1,…,An). By (‡), \(\nvdash _{\mathsf {S}}\!{\star }(A_{1},\ldots ,A_{n})\), so, since S is closed under uniform substitution, \(\nvdash _{\mathsf {S}}\!{\star }(p_{1},\ldots ,p_{n})\) and by (‡) again ⊢S ∘ (p1,…,pn), from which a second appeal to uniform substitution delivers the desired conclusion that ⊢S ∘ (B1,…,Bn). □
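A concrete instance may help fix ideas (a sketch, taking S to be the set of classical tautologies in the letters p and q, with ∘ as ∧ and ⋆ as ∨; the pair (p, q) lies outside both of the induced relations, so these non-trivial relations are not each other’s complements, just as the proposition requires):

```python
from itertools import product

def taut(f):
    """Is f, a Boolean function of a p/q assignment, a classical tautology?"""
    return all(f({'p': p, 'q': q}) for p, q in product([True, False], repeat=2))

# With S the classical tautologies, R_and(A, B) iff A & B is in S, and
# R_or(A, B) iff A v B is in S.  Neither relation is universal or empty:
assert taut(lambda v: v['p'] or not v['p'])        # (p, ~p) is in R_or
assert not taut(lambda v: v['p'] and not v['p'])   # (p, ~p) is not in R_and
# And (p, q) lies outside both R_and and R_or, so the complement of R_and
# is not R_or -- consistent with Proposition 2.8, which rules out
# complementary pairs of non-trivial relations of this kind.
assert not taut(lambda v: v['p'] and v['q'])
assert not taut(lambda v: v['p'] or v['q'])
```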

On Sequents and Rules

If we were conceiving of logical relations and of independence according to logics conceived otherwise than as consequence relations, for example (as urged for various—broadly inferentialist—purposes in [53]) as sets of sequent-to-sequent rules, including zero-premiss rules of this kind, things might naturally go differently. On this conception, different proof-systems would be said to induce the same logic not just when the same sequents were provable on the basis of them but when the same rules were derivable from them. (Here we are thinking of sequents as Set-Fmla sequents, which is to say as having exactly one formula on the right of the sequent separator, the latter written here as “≻”.) Extending a logic so conceived would then require retaining such rules, and so a subcontrariety rule for A and B taking us from sequents Γ, A ≻ C and Γ, B ≻ C to Γ ≻ C would be retained on passage to an extension. (Note the emphasis on derivable rather than merely admissible sequent-to-sequent rules.) But in the present paper we are doing the best we can for the notions in play when logics are identified (no doubt too crudely) with consequence relations. (Exception: the condition that a consequence relation should coincide with that generated by a set of rules plays an important role in Definition 7.1(ii), introducing a notion featuring in our final proposal concerning independence.) Although for Example 2.14 the issue could be settled instead by passing to generalized consequence relations, that is not in general the case: the difference would re-emerge as that between such relations on the one hand—essentially (certain) sets of Set-Set sequents—and sets of sequent-to-sequent rules in the framework Set-Set.

Comparison with Rules

After Example 2.15 it was mentioned that instead of thinking of \(\mathcal {R}^{\lor }\) as tied to classical logic in the manner envisaged in Remark 2.6, we could think of it more abstractly as a (partial) function mapping any ⊢ with ∨ in its language to the set of pairs 〈A,B〉 for which ⊢ A ∨ B, for A,B formulas of that language. Here we note a similarity with at least one treatment of rules, namely that of 4.33 in [53], in which n-premiss rules are taken to be partial functions from languages L to sets of (n + 1)-tuples of formulas of L. Thus the rule of Modus Ponens (for a given implicational connective →) maps a language L with that connective in its vocabulary to {〈A, A → B, B〉 | A,B ∈ L}. (Here we put the conclusion of an application of the rule in the final position, and the ordering of the premiss positions is immaterial, so one could, less arbitrarily, regard the set of applications as a set of pairs whose first component is a multiset of formulas and whose second is a formula.) In fact [53] mainly considers sequent-to-sequent rules rather than formula-to-formula rules, but there is no need to bring that complication in here. There is already a slight further complication in that, while the dependence of the eventual set of tuples on the logic taken as argument has been emphasized, there is also a tacit sensitivity to the connective in question, so that one might more usefully think of the abstract rule in the case of Modus Ponens as mapping a logic and a binary connective # in its language to the set of all triples 〈A, A#B, B〉, so that we can say, for instance: intuitionistic logic satisfies Modus Ponens for ⇔ and for →. Further, the identification of the set of tuples—the set of applications of the rule in question, that is—depends only on the language of the logic concerned rather than also on the logic itself, whereas in the logical relations case, it depends on the logic (the consequence relation) itself.

Concerning Example 2.16

The consequence relation ⊢min of Example 2.16 presents us with another simple case of multiple determination. It is a familiar fact that the minimum consequence relation on any language is determined by the class of all valuations (for that language). But it is also determined—more relevantly for Example 2.16—by the class of valuations falsifying exactly one formula. Let \(v_{\bar {A}}\) be the valuation such that for all formulas B, \(v_{\bar {A}}(B) = T\) iff B is not the formula A. (So \(\bar {A}\) is not-A in the sense of “other than A”, rather than the negation of A.) Then if \({\Gamma } \nvdash _{\mathsf {min}} A\), \(v_{\bar {A}}\) verifies all formulas in Γ, but not A (since A∉Γ), giving us the determination result claimed. (More precisely: this gives us the ‘completeness’ half of the result—if every v ∈ V verifying all of Γ verifies A, then Γ ⊢minA—while the converse ‘soundness’ half is immediate from the fact that the current V is a subset of the set of all valuations and ⊢min is determined by the latter class.)
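For a finite stand-in universe of formulas, the determination claim can be checked mechanically (a sketch; the names formulas, v_bar and determined_consequence are illustrative inventions):

```python
# |-min is just membership: Gamma |-min A iff A is in Gamma.  Formulas are
# represented as strings, over a small hypothetical stand-in universe.
formulas = ['p', 'q', 'p & q']

def v_bar(A):
    """The valuation verifying every formula other than A."""
    return {B: B != A for B in formulas}

V = [v_bar(A) for A in formulas]

def determined_consequence(V, gamma, a):
    """Gamma |- a according to the consequence relation determined by V."""
    return all(v[a] for v in V if all(v[g] for g in gamma))

# Over this universe, the relation determined by V coincides with |-min:
for gamma in ([], ['p'], ['p', 'q'], formulas):
    for a in formulas:
        assert determined_consequence(V, gamma, a) == (a in gamma)
```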

Reacting to Example 2.16

In the discussion after Example 2.16, it was remarked that, in view of our finding there, on the transferred semantic conception of independence according to a consequence relation (Definition 2.13) we would have to say that p and q are not independent according to ⊢min. (In the preceding longer note we confirm this with the aid of the V there described.) Mention was made of the possibility of tweaking the definitions to avoid this verdict, should it be found intolerable. The prototype here is the treatment of the inconsistency of a set Γ of formulas relative to a consequence relation ⊢ in Shoesmith and Smiley [102] as Γ ⊢ A for every formula A (in the language of ⊢), which automatically makes the set of all formulas inconsistent according to ⊢ even if ⊢ is, for example, the classical consequence relation restricted to the ∧-fragment (of ⊢CL, for definiteness), in which case the talk of inconsistency does not sit well. So [101], p. 615, calls a set Γ of formulas formally inconsistent if for every substitution s, s(Γ) (that is, {s(A)|A ∈Γ}) is inconsistent in the sense previously defined. Similarly we could say that formulas A,B are formal subcontraries according to ⊢ if for all substitutions s, s(A) and s(B) are subcontraries according to ⊢ as we have been understanding this (i.e., as explained in Example 2.12(i)), with a similar move in relevantly similar cases, and then understand independence as not standing in proper logical relations in this ‘formally’ beefed up sense. In fact, since the issue will not arise again, we leave it to the reader’s discretion how to react to these special cases in which logical relations (including the Shoesmith–Smiley set-based inconsistency or contrariety example) apparently arise from considerations of cardinality and identity of formulas.

On Conjunctive Combination

The “⋅” notation is as in [55]; Humberstone [53] has “\(\vartriangle \)” for this (with “ + ” and “\(\triangledown \)”, respectively, for the dual operation of ‘disjunctive’ as opposed to ‘conjunctive’ combination of valuations). An alternative terminological route one could take for classes of valuations, instead of saying that A,B are subcontraries over V when for every vV, v(A) = T or v(B) = T, would be to call A,B subcontraries over V when for every vV there exist v1,v2V with v = v1v2, v1(A) = T and v2(B) = T. While this would make for some concise formulations, e.g., connecting the relation of being subcontraries according to ⊢ (defined in Example 2.12(i)) with being subcontraries over the class of valuations consistent with ⊢, it would be an unfamiliar and confusing way to talk about subcontrariety, for which reason this route is not taken here. One could make similar moves with the bivalent valuations induced by the Kripke semantics for intuitionistic and intermediate logics, in which case the upshot would resemble more closely the Beth semantics; comparative remarks on these various semantic treatments of disjunction can be found in the several subsections of §6.4 in Humberstone [53].

1.3 On Section 3

Pseudo-subcontrariety and Implicational Converses

Apropos of Example 3.4, we make the following observation. If we call formulas A,B implicit implicational converses according to a consequence relation ⊢ if they are respectively ⊢-equivalent to formulas which are explicit implicational converses (i.e., of the forms C → D and D → C for some C,D), then formulas A,B are implicit implicational converses according to ⊢IL if and only if A,B are mutual pseudo-subcontraries according to ⊢IL (or equivalently: if each of A,B anticipates the other according to ⊢IL). The same goes for ⊢IL, and something similar also holds in the case of the (semi-)relevant logic RM, though for the precise details the reader is referred to Section 1 of [48] (where the notation “⊢RM” is used for an inference relation—see note 34 above—which is not actually a consequence relation).

Intermediate-Logical Postscript

Definition 7.3 calls formulas Wansing subcontraries (in intuitionistic logic and its extensions) when their negations are contraries; but both contrariety (understood as relating formulas when together they have ⊥ as a consequence) and Wansing subcontrariety on the face of it have potentially interesting strengthenings which are one intuitionistically unavailable De Morgan step away from the negated conjunction formulations of these notions.

Definition 1

  (i)

    Formulas A,B are strong contraries according to ⊢⊇⊢IL if and only if ⊢¬A ∨¬B.

  (ii)

    Formulas A,B are strong Wansing subcontraries according to ⊢⊇⊢IL if and only if ⊢¬¬A ∨¬¬B.

Example 1

Any formula and its negation are strong contraries according to Dummett–Lemmon/Jankov intermediate logic KC, which is often presented axiomatically as the extension of IL by means of the schema—¬A ∨¬¬A (the ‘Weak Law of Excluded Middle’)—saying just that. (By the ‘Law of Triple Negation’—¬¬¬A equivalent to ¬A—it follows that A and ¬A are also strong Wansing subcontraries in KC.)

Both concepts share a feature which in the case of subcontrariety itself (as defined in Example 2.12(i))—alias, for ⊢IL itself, the intuitionistic version of the Lemmon logical relation \(\mathcal {R}^{\lor }\)—we have been calling degeneracy, though perhaps more generally it is best described as the ‘monadic representability’ of the binary relation concerned ([54], p. 178), which for present purposes we may oversimplify to the following: two formulas’ standing in the relation is a matter of one (or both) of them standing in some 1-ary logical relation. This is just the Disjunction Property at work again, but one can ask how these concepts fare in intermediate logics without that property, just as one can ask how subcontrariety itself fares, remarking, for example, that Dummett’s intermediate logic LC (mentioned in note 29) is the smallest such logic in which an implication and its converse are subcontraries. Here we confine ourselves to the relation of strong contrariety, and so take a special interest in intermediate logics in which the intuitionistically “missing” De Morgan Law (according to which ¬C ∨¬D is a consequence of ¬(C ∧ D)) is still missing, since its presence would collapse contrariety and strong contrariety. Concerning such logics the question naturally arises as to whether there is any one of them in which formulas A and B can be found which are strong contraries but—non-degeneracy condition coming up—neither of them has a provable negation. Since the ‘missing’ De Morgan Law precisely axiomatizes (over IL) the logic KC mentioned above (also sometimes called De Morgan logic for this reason), the range of logics we are concerned with is precisely the set of intermediate logics which are not extensions of KC, and we need to isolate a restricted version of the Disjunction Property:

Definition 1

A logic S with ∨ and ¬ in its language has the Negative Disjunction Property iff for any formulas A,B for which we have ⊢S ¬A ∨ ¬B we have ⊢S¬A or ⊢S¬B.

Craig McKay has recently proved the following result:Footnote 2

Proposition 1

An intermediate logic S has the Negative Disjunction Property iff \(\mathsf {S}\!\nsupseteq \! \mathsf {KC}\) .

The “if” direction here returns a negative answer to our question, because if we have an intermediate logic in which strong contrariety differs from contrariety, it must lack the De Morgan law which would render these two equivalent and hence not be an extension of KC. So whenever ⊢S ¬A ∨ ¬B—which is what we need for A and B to be strong contraries according to S—the negation of one or other of the two contraries would have to be provable, by the above Proposition. There is a similar ‘monadic collapse’ conclusion in the case of strong Wansing subcontraries: if S is to distinguish strong Wansing subcontrariety from Wansing subcontrariety simpliciter, it must not be an extension of KC, in which case, whether or not S has the Disjunction Property, it has the Negative Disjunction Property; so when A and B are strong Wansing subcontraries, one of ¬¬A, ¬¬B will be provable outright, by the Proposition.

1.4 On Section 4

Proof of Proposition 4.4

We repeat the claim to be proved:

Proposition 4.4

Formulas A and B are independent over the class of Boolean valuations if and only if they are de Jongh independent in CL.

Proof

Suppose that A and B are formulas which are truth-value independent over the class of Boolean valuations. We want to show that they are de Jongh independent in CL. If the latter is not the case then there is a formula C(p,q) containing no variables beyond those indicated, with ⊢CLC(A,B) while \(\nvdash _{\mathsf {CL}} C(p,q)\). Since \(\nvdash _{\mathsf {CL}} C(p,q)\), there is a Boolean valuation v with v(C(p,q)) = F. Let v(p) = x and v(q) = y (x,y ∈{T,F}). But then, since C(p,q) contains no other variables, u(C(A,B)) = F for any Boolean valuation u with u(A) = x, u(B) = y; as ⊢CLC(A,B), there can be no such valuation. Thus A and B are not truth-value independent over the class of Boolean valuations after all, contradicting our initial assumption.

Conversely, suppose that A and B are de Jongh independent in CL but not independent over the class of Boolean valuations. This means that either there is no Boolean valuation assigning T to both A and B or else no valuation assigning T to A and F to B, or else no Boolean valuation assigning F to A and T to B, or else no Boolean valuation assigning F to each of A,B. Let us take the first case by way of illustration: there is no Boolean v with v(A) = v(B) = T. Then ⊢CL ¬(A ∧ B), so we take C(p,q) as ¬(p ∧ q) and notice that while ⊢CLC(A,B), we do not have ⊢CLC(p,q), so A and B are not de Jongh independent. In the case in which there is no Boolean v with v(A) = T and v(B) = F, we choose C(p,q) as p → q and argue similarly; likewise, mutatis mutandis, in the remaining two cases.□
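The truth-value half of this equivalence lends itself to a brute-force check. The following sketch is mine rather than the paper's: formulas are encoded as Python predicates on valuations, and independence is tested by enumerating all Boolean valuations of the relevant sentence letters and recording which of the four truth-value combinations get realized.

```python
# Illustrative sketch (not from the paper): brute-force test of truth-value
# independence of two formulas over the class of Boolean valuations.
from itertools import product

def truth_value_independent(A, B, letters):
    """A, B: functions from a valuation (dict: letter -> bool) to bool.
    Independent iff every pair in {T,F} x {T,F} is realized."""
    realized = set()
    for values in product([True, False], repeat=len(letters)):
        v = dict(zip(letters, values))
        realized.add((A(v), B(v)))
    return len(realized) == 4

p = lambda v: v['p']
p_iff_q = lambda v: v['p'] == v['q']   # p <-> q: all four combinations occur
p_and_q = lambda v: v['p'] and v['q']  # p /\ q: <F,T> is unrealizable

print(truth_value_independent(p, p_iff_q, ['p', 'q']))  # True
print(truth_value_independent(p, p_and_q, ['p', 'q']))  # False
```

The second call fails precisely because no Boolean valuation makes p false and p ∧ q true, the kind of ban on truth-value combinations that the proof above converts into a witnessing formula C(p,q).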

The Warning Mentioned in Definition 4.6(iii)

We are concerned with the notations

$$ \vdash_\mathsf{S} \sigma \qquad\quad \text{and}\quad\qquad {\Gamma} \vdash_\mathsf{S} B, $$

as two ways of saying the same thing for the case in which σ is the sequent Γ ≻ B. As was mentioned, there are dangers in this notational convention when one is dealing with a substructural logic, such as BCI, with reference to which case we illustrate the general issue here. In a typical sequent-based proof system for such logics—for example, a Gentzen-style sequent calculus (such as that in §5 of Kowalski and Humberstone [67])—the provability of Γ ≻ B will not imply the provability of Γ, A ≻ B, even though the hypothesis that Γ ⊢BCI B does guarantee that Γ,A ⊢BCI B. This is because the weakening or monotonicity condition—a structural condition built into the notion of a consequence relation—is not mirrored by a corresponding structural rule in the sequent calculus (see §5 of Kowalski and Humberstone [67]), lest this interfere with applications of the operational rules (in particular for the present case, the Right insertion rule for →). Thus the proposed equivalence of the two notations inset above should be avoided in the case of substructural S. Otherwise, where σ = Γ ≻ B and σ+ = Γ, A ≻ B, one is invited to reason:

$$\vdash_\mathsf{BCI} \sigma ~\Rightarrow ~{\Gamma} \vdash_\mathsf{BCI} B ~\Rightarrow ~{\Gamma}, A \vdash_\mathsf{BCI} B ~\Rightarrow ~\vdash_\mathsf{BCI} \sigma^{+},$$

where the starting point could be a correct claim, and the terminus an incorrect claim. (For instance, take Γ as empty, B as q → q and A as p.) There is a further complication concealed in this discussion, as the reader has already perhaps noted, and that is that for the case we have been illustrating the danger with, BCI, the sequent calculus requires rules in which the left-hand side of the sequent is a multiset of formulas rather than a set (what [53] calls the logical framework Mset-Fmla as opposed to Set-Fmla). So the convention we recommend avoiding in a substructural setting because of the disastrous ⇒-chain above is actually not quite coherent, in view of the equivocation on what “Γ” stands for. This complication would not arise in the case of a proof-system, substructural in similarly disallowing left weakening, but in which provability is not affected by differences between multisets on the left where the set of formulas is the same, such as the ‘semi-relevant’ logic R-Mingle (or RM—see §29.3 and elsewhere in Anderson and Belnap [2]). In that case, however, although provability of sequents is not affected by changes in multiplicity on the left, what makes for a correct proof will be, and keeping the left-hand sides as multisets will require additional structural rules (of contraction and expansion, which respectively remove and insert duplicate formula occurrences on the left) for a system in the style of [67]. In fact, one could even adapt the BCI system there so as to avoid the left weakening problem by imposing a global derivational constraint to the effect that that rule (which subsumes expansion) can be applied, provided the application does not precede the application of any operational rule. (Contraction on the left would remain a problem for this approach, an approach which is in any case not really in keeping with the spirit of Gentzen systems.)

Elaboration of Note 36

As is mentioned in that note, a more directly parallel use in propositional logic of Marczewski-style independence appears in Canty and Scharle [14], where, using Polish notation and having defined a certain 1-ary connective they write as Q, the authors remark that “It may be easily seen that XpQp = YpQp, where X and Y are arbitrary non-modal binary functors, implies that Xpq = Ypq. Hence as Xpq ranges over the sixteen non-modal binaries, XpQp ranges over the sixteen modal singularies of S5.” The authors do not use the terminology of independence here but this is close to Marczewski independence of (the equivalence classes of) p and Qp in the Boolean reduct of the Lindenbaum algebra of S5, since it says that if the truth-functions X and Y uniformly coincide in their values for p and Qp as arguments, they coincide on any two arguments. (We consider binary truth-functions because we are asking about the independence of two formulas.) Canty and Scharle’s further claim that their Q is (to within equivalence) the only candidate functor to have this effect for the case of S5 is mistaken, as Massey [78] points out. The mistake is made vivid by looking at the Hasse diagram of the 16-element Boolean algebra of S5-equivalence classes of formulas in one sentence letter, as depicted in Figure 20.A1 at p. 605 of Humberstone [50], in which all four alternative additional free generators—additional to the equivalence class of the chosen sentence letter—can be seen on the middle row. There the sentence letter in question was q rather than p, and the result of applying Canty and Scharle’s operation Q to it is written as X(q). The other three candidates are Δq and ∇q, two common representations of “it is noncontingent whether q” and “it is contingent whether q”, and the negation of X(q) (appearing there in the equivalent form X(¬q)). Note that X(q), which appears as Qq in the notation of [14], amounts to q ⇔ Δq. Further references on this topic can be found at the end of Remark 2.10.1 in [56].

Proof of Proposition 4.10

Beginning with its restatement:

Proposition 4.10

For any formulas A, B in the language of IL, the following two claims are equivalent:

  1. (1)

    A and B are head-linked according to IL;

  2. (2)

    (A → B) → A ⊢IL A and (B → A) → B ⊢IL B.

Proof

The (1) ⇒ (2) direction is a matter of checking that (2) holds when A and B are written as A0 → H and B0 → H. For (2) ⇒ (1): Suppose that we have (2) for a given A, B, and take A0, B0, and H as respectively A → B, B → A, and A ∧ B. One has then only to verify that A ⊣⊢IL A0 → H and B ⊣⊢IL B0 → H. □

1.5 On Section 5

Proof of Proposition 5.3

We begin with a statement of the claim to be proved:

Proposition 5.3

For the translation τ from the language of CL to the language of modal logic in which the only sentence letter to appear is p1, given by: τ(pi) = □i−1p1, τ the identity map on other compounds, we have for all non-modal formulas

$$ A_{1},\ldots,A_{n}, B: A_{1},\ldots,A_{n} \vdash_\mathsf{CL} B~\mathit{if~and~only~if}~\tau(A_{1}),\ldots,\tau(A_{n})\vdash_\mathsf{KD!} \tau(B). $$

Proof

‘Only if’: the result follows from the fact that ⊢KD! is a substitution-invariant extension of ⊢CL, putting “□i−1p1” for pi.

‘If’: Suppose that \(A_{1},\ldots ,A_{n} \nvdash _{\mathsf {CL}} B\), so we have a Boolean valuation, v, say, verifying all of the Ai but not B. Using as a frame the set of positive integers with R-successors being the usual arithmetical (immediate) successors, construct a model verifying p1 at precisely those i for which v(pi) = T, thus making □i− 1p1 true at 1. This gives the basis case for an induction on the complexity of non-modal formulas C showing that the model in questionFootnote 3 verifies τ(C) at 1 iff v(C) = T, from which we infer that at 1 this model verifies τ(A1),…,τ(An) but not τ(B). Since the frame is functional, we conclude that \(\tau (A_{1}),\ldots ,\tau (A_{n})\nvdash _{\mathsf {KD!}} \tau (B)\).□
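For illustration only, the translation τ can be prototyped as a recursive map on formula trees. The tuple encoding below is an assumption of this sketch, not the paper's notation: ('p', i) stands for the letter pi and ('box', A) for □A.

```python
# Hypothetical sketch of the translation tau: tau(p_i) = Box^{i-1} p_1,
# with tau commuting with the connectives. Formulas are nested tuples:
# ('p', i), ('not', A), ('and', A, B), ('or', A, B), ('imp', A, B).
def tau(f):
    if f[0] == 'p':
        i = f[1]
        result = ('p', 1)
        for _ in range(i - 1):          # prefix i-1 boxes to p_1
            result = ('box', result)
        return result
    # any compound: keep the connective, translate the subformulas
    return (f[0],) + tuple(tau(sub) for sub in f[1:])

print(tau(('and', ('p', 1), ('p', 3))))
# ('and', ('p', 1), ('box', ('box', ('p', 1))))
```

Distinct letters thus go to distinct iterations of □ applied to the single letter p1, which is the trade-off between new sentence letters and new modal complexity that the surrounding discussion turns on.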

Burgess on Relevant Logic

Here is the passage from Burgess [13], alluded to in Section 5, on the system FDE of first-degree entailments of Anderson and Belnap (see [2], §15, or Dunn [26]), in which ‘r-logic’ means relevant/relevance logic (though here only formulas constructed with the aid of connectives ∧, ∨, and ¬ are under consideration):

Now for the account of entailments among truth-functional compounds given by relevance/relevant logic (henceforth r-logic for short). This part of r-logic is called the ‘first-degree’ fragment. Classically an entailment

(9):

(¬)p1 ∧… ∧ (¬)pm ⊢ (¬)q1 ∨… ∨ (¬)qn

can hold in either of two cases: (i) some sentence letter occurs on both sides with the same sign (either both plain or both negated), (ii) some sentence letter appears with both signs (both plain and negated) on the same side. Like perfectionistic logic,Footnote 4 relevance/relevant logic maintains that entailment holds only in the non-degenerate case (i).

A simple way to describe the relationship between classical and r-logical entailment in this case would be as follows. Introduce auxiliary sentence letters, so that each ordinary sentence letter p,q,r,… has an auxiliary sentence letter p∗,q∗,r∗,… as a mate. Replace each pi or qj that appears negated in (9) by its mate to obtain (9∗). Then (9) holds in r-logic iff (9∗) holds in classical logic.

This relationship can be extended to more complex cases. We define by recursion on complexity the distinction between the positive and the negative occurrences of a sentence letter in a formula. In an atomic formula, the sentence letter occurs positively. If § is conjunction or disjunction, the positive (respectively, negative) occurrences of a sentence letter in A§B are the positive (respectively, negative) occurrences in A and in B. The positive (respectively, negative) occurrences in ¬A are the negative (respectively, positive) occurrences in A. For any formula A involving ordinary sentence letters, let A∗ be the result of replacing all negative occurrences of any sentence letter by an occurrence of its auxiliary mate. Then premises A1,…,An entail conclusion B according to r-logic iff \(A_{1}^{*},\ldots , A_{n}^{*}\) entail conclusion B∗ according to classical logic.

This was not the r-logicians’ original definition—that was rather more complicated—but it is equivalent. It is easily seen to follow from the present definition that r-entailment is decidable. It also follows from the present definition that r-entailment is transitive. On performing the substitution of mates, the argument from p to p ∨ q remains the classically valid argument from p to p ∨ q, since all occurrences of sentence letters are positive, so in disjunction introduction the premise r-entails the conclusion. On performing the substitution of mates, the argument from (p ∨ q) ∧ ¬p to q becomes the classically invalid argument from (p ∨ q) ∧ ¬p∗ to q, so in disjunctive syllogism the premise does not r-entail the conclusion. We leave it to the reader to verify that p ∧ q r-entails p but that ¬(p ∧ q) ∧ p does not r-entail ¬q.
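The starred translation in the quoted passage can be prototyped directly. In the sketch below (the encoding and function names are mine, not Burgess's), negative occurrences of letters are replaced by starred mates, and classical entailment is then tested by enumerating valuations; a single conjoined premise stands in for a premise list.

```python
# Sketch of Burgess's mate criterion for first-degree r-entailment.
# Formulas over 'and', 'or', 'not' are nested tuples; letters are strings.
from itertools import product

def star(f, neg=False):
    """Replace negative occurrences of letters by their starred mates."""
    if isinstance(f, str):
        return f + '*' if neg else f
    if f[0] == 'not':
        return ('not', star(f[1], not neg))   # negation flips polarity
    return (f[0], star(f[1], neg), star(f[2], neg))

def letters(f):
    if isinstance(f, str):
        return {f}
    return set().union(*(letters(s) for s in f[1:]))

def ev(f, v):
    if isinstance(f, str):
        return v[f]
    if f[0] == 'not':
        return not ev(f[1], v)
    if f[0] == 'and':
        return ev(f[1], v) and ev(f[2], v)
    return ev(f[1], v) or ev(f[2], v)

def classically_entails(premise, conclusion):
    ls = sorted(letters(premise) | letters(conclusion))
    for vals in product([True, False], repeat=len(ls)):
        v = dict(zip(ls, vals))
        if ev(premise, v) and not ev(conclusion, v):
            return False
    return True

def r_entails(premise, conclusion):
    return classically_entails(star(premise), star(conclusion))

# Disjunction introduction survives; disjunctive syllogism does not:
print(r_entails('p', ('or', 'p', 'q')))                          # True
print(r_entails(('and', ('or', 'p', 'q'), ('not', 'p')), 'q'))   # False
```

Running the reader's exercise through the same function, p ∧ q r-entails p while ¬(p ∧ q) ∧ p fails to r-entail ¬q, exactly as the passage claims.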

While there is no explicit criticism of relevant logic (and in particular of FDE) in this passage, there is perhaps some kind of implicit criticism. (Burgess has certainly not been slow to mount a critical attack on Anderson–Belnap style relevant logic elsewhere, including in other sections of Chapter 5 of [13], as well as in earlier publications such as [11] and [12]. It is also true that essentially the same point as is being construed as an adverse criticism here has been made by people more sympathetic to the relevant logical enterprise than Burgess: the mating arrangements between p,q,… and p∗,q∗,… above appear in this notation on p. 53 of Dunn [26], for example.)

The kind of point made by Burgess in the passage quoted above has certainly occurred to others. It was made (in fact in the variant form given in Remark 5.2(i)) in an unpublished 1987 undergraduate ‘Honours thesis’ by Michaelis Michael, then a student of the author (Michael [83], pp. 35–38), and explicitly discussed by him using the vocabulary of independence, as on p. 4:

Moreover I shall show that relevant negation in this first degree fragment has a rather disturbing characteristic: a negation is in a very strong sense logically independent from (one might be tempted to say irrelevant to) the unnegated sentence.

The reference to a first degree fragment is to FDE rather than FDF (see the discussion after Remarks 5.2). The parenthetical remark is a reminder that although Belnap’s variable-sharing criterion of relevance just requires a common variable in the antecedent and consequent of a provable implication, the way this is implemented in the Anderson–Belnap programme involves a shared negative occurrence of some variable or a shared positive occurrence, rather than mixing polarities (e.g., antecedent (p ∧ ¬p) ∧ q and consequent ¬q ∨ r).

Bivalent Supremacism?

The four-valued ‘Dunn–Belnap’ semantics for FDE or relevant logic more generally (the ‘American Plan’) was mentioned as a topic for consideration here. Here the truth-values are subsets of {T,F} and we use the conventional labelling of these subsets with t, f for {T}, {F} respectively and b, n (“both”, “neither”) for {T,F} and \(\varnothing \) respectively, assuming familiarity with the algebra (operations for ∧,∨,¬) and the motivation for the choice of designated elements (namely t and b) which turns this algebra into a matrix.Footnote 5 Here and in general, we take the perspective explained in note 20, with matrix evaluations being homomorphisms from the propositional language concerned to the algebra of the matrix, such an evaluation h giving rise to a bivalent valuation vh as in that note. Now Definition 2.1 defined complete independence over a set V of valuations, but we have given no attention to the corresponding concept in connection with matrix evaluations, even though the set of sentence letters, to take the best known example, obviously enjoys the corresponding independence property in standard matrix methodology: any set of sentence letters can be assigned any set of values.
Stating this for simplicity in the finite case, where a matrix \(\mathfrak {M} = \langle \boldsymbol {A}, \underline {D}\rangle \) has \(\underline {A}\) for the universe of its algebra A (and \(\underline {D} \subseteq \underline {A}\) is the set of designated elements), formulas A1,…,An have this stronger matrix independence property whenever for all \(a_{1},\ldots ,a_{n} \in \underline {A}\), there is an \(\mathfrak {M}\)-evaluation h with h(Ai) = ai (i = 1,…,n).Footnote 6 For the Dunn–Belnap choice of \(\mathfrak {M}\), and n = 2, we see that whereas p, ¬p were independent over \(V = \{v_{h} \vert h \text { is an } \mathfrak {M}\text {-evaluation}\}\), they are not \(\mathfrak {M}\)-independent in the sense just articulated: if h(p) = t, for instance, we cannot have h(¬p) = t or h(¬p) = b or h(¬p) = n but can only have h(¬p) = f. (Contrast the fact that we can have vh(¬p) = T while vh(p) = T, since this is the case when h(p) = b.) That was the point of matrix semantics, indeed: to restore functionality at the level of the individual values in \(\underline {A}\) when it is absent at the macro-level of designation status (T,F-classification).Footnote 7 Thus the shift from the Australian to the American plan may be seen to threaten the case made in this section for claiming that p and ¬p should be regarded as logically independent in FDE.
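The contrast just drawn can be checked mechanically. In this sketch (the coding of the four values as frozensets of classical values is mine), four-valued negation is a function of its argument, so matrix independence for p and ¬p fails outright; yet at the level of designation all four T,F-combinations for the pair are realized.

```python
# Dunn-Belnap values as sets of classical values:
# t = {T}, f = {F}, b = {T,F} ("both"), n = {} ("neither").
t, f, b, n = frozenset('T'), frozenset('F'), frozenset('TF'), frozenset()

def neg(x):
    """Four-valued negation: swap T and F inside the value."""
    return frozenset({'T': 'F', 'F': 'T'}[c] for c in x)

designated = lambda x: 'T' in x   # the designated values are t and b

# Matrix independence fails: h(p) fixes h(~p) completely.
assert all(neg(x) == {t: f, f: t, b: b, n: n}[x] for x in (t, f, b, n))

# But the bivalent valuations v_h realize all four T/F-combinations
# for <p, ~p>, one per four-valued input:
combos = {(designated(x), designated(neg(x))) for x in (t, f, b, n)}
print(sorted(combos))
# [(False, False), (False, True), (True, False), (True, True)]
```

The value b is what makes the pair (True, True) available at the level of designation, which is the point about vh(p) and vh(¬p) made parenthetically above.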

However, the case set out for regarding p and ¬p as independent in the main body of Section 5, as well as in the longer note on Burgess above, attended, not to the recent choice of V, but to the fact that p stood in the same relation to ¬p (as far as ⊢FDE was concerned) as it did to a completely different sentence letter p∗ (or to the negation of that new sentence letter). These new sentence letters are not visible in the notation with ¬ but, treating the new letters as suggested in Remark 5.2(i), once they are made explicit, the information that h(p) = t or that h(p) = b, for example, is really information to the effect that p is being supposed true while p∗ is not, or, that p along with p∗ is being supposed true, and similarly in the subdivision of the undesignated case (f and n). The treatment of p here leaves completely undetermined the treatment of p∗, which is reflected in the fact that the earlier V obeys no bans on the distribution of T and F to 〈p, ¬p〉.

This reply no doubt needs further polishing. In particular, there is no suggestion that the Dunn–Belnap matrix involves the kind of spurious proliferation of values exhibited by matrices which are not (what is most commonly now called) reduced. (A reduced matrix is one for which the only matrix congruence is the identity relation.Footnote 8) And there should be no suggestion that there is something wrong with the language of ⊢FDE having ¬ as one of its connectives. That is needed so that we can compare this consequence relation with its extensions—most famously ⊢CL and the ‘intermediate’ Kleene 3-valued and Priest’s ‘Logic of Paradox’ consequence relations, the latter two being determined respectively by the b-less and n-less submatrices of the Dunn–Belnap matrix. Rather, the point of the appeal to Burgess-like considerations is simply to bring out the relative independence of a formula and its negation as judged by their behaviour according to ⊢FDE. This is in conspicuous contrast to the case of the extensions just mentioned, in which a formula and its negation are contraries (according to ‘strong Kleene’) or subcontraries (according to LP), or contradictories (according to CL).Footnote 9 No doubt further light would be thrown on these issues by consideration of the less famous ‘super-Belnap’ logics—see Rivieccio [92]—though no steps will be taken in that direction here.

1.6 On Section 6

Postscript: One Non-problem for the de Jongh Account

The case of p and □p failing to be de Jongh independent according to ⊢K may arouse concern because it may seem that the result about non-independence for p and □p would apply to any formula containing p as a subformula, and this would contradict such cases as that of p and p ⇔ q, which were noted in Example 2.2(iii) to be independent over the class of Boolean valuations, which supposedly makes them de Jongh independent according to ⊢CL—or at least so Proposition 4.4 says.Footnote 10 Thus the □p-and-p difficulty would spread across to the case of p ⇔ q and p. That would then be a problem not just for de Jongh’s account of independence (envisaged as applying beyond the confines of IL) but for our presentation of that account, which would itself be inconsistent.

Accordingly, a certain interest attaches to the question of why we can’t reason as follows, much as in the case of □p above, and changing variables as there for convenience:

Take σ(p,q) as p, q ≻ r. This is not CL-provable but when we substitute p ⇔ r for q what we get is p, p ⇔ r ≻ r, which of course is CL-provable: therefore p and p ⇔ r are not de Jongh independent after all in CL.

What has gone wrong here?

The answer is that the hastily introduced ‘sequent context’ notation “σ(p,q)” has to be understood as subject to the same proviso as its ‘formula context’ prototype. Just as for “C(p,q)”, used to decide the independence of A and B by having us consider C(A,B), where C(p,q) was stipulated not to be constructed from any sentence letters other than the p and q explicitly indicated, so a sequent σ(p,q) needs to be understood as using only formulas constructed with p,q. (One can go back and look to see how this assumption is used for C(p,q) in the proof of Proposition 4.4.)

Much thus hangs, as far as the de Jongh account is concerned, on the difference between involving a further propositional variable (sentence letter) on the one hand, and involving a further connective, on the other. The KD! trade-off at work in Proposition 5.3 between iterating □ and choosing a new sentence letter might already make us feel a little uncomfortable about this, but perhaps the most obvious concern would arise over the treatment of ⊥ in minimal logic, which is a 0-place connective having there no logical behaviour other than that exhibited by a newly chosen sentence letter. Assuming one has no religious objections to such connectives,Footnote 11 they constitute another example of the anomalous dependence verdicts of the de Jongh account, this time as applied to 1-ary logical relations: we can find σ(p) which is not provable in minimal logic but for which σ(⊥) is, by taking σ(p) as p ≻ ⊥.

This attention to ⊥ prompts a question about negation, namely: whether p and ¬p are de Jongh independent in minimal logic. A replay of the argument from the main body of this section involving negation in FDE or □ in K would return the same instant negative verdict, but in this case the negative verdict is genuinely appropriate (though not on those grounds), since p and ¬p do stand in a non-trivial special coercive logical relation, the former anticipating the latter, or equivalently (for ⊢ML no less than for ⊢IL) having the latter as a pseudo-subcontrary, as these terms were introduced in the preamble to Example 3.4, and to which we return in Example 7.4(i), where the status of these as bona fide logical relations will be urged.

Returning to the case presenting a non-problem above, that of p ⇔ q and q according to ⊢CL, where an apparent verdict of de Jongh dependence resulted from a failed application of the definition, we can ask how these formulas fare according to ⊢IL and we find ourselves with another case like that featuring in Example 2.15, where we saw that formulas could become independent in the transferred semantic sense according to a logic extending one according to which they were not independent. Here we have the same thing happening with de Jongh independence. The necessary background is given by the intuitionistic (and therefore also minimal, since ¬ and ⊥ are not involved) ‘law of triple equivalents’: the equivalence of ((p ⇔ q) ⇔ q) ⇔ q with p ⇔ q. Letting σ(p,q) be (p ⇔ q) ⇔ q ≻ p we have σ(p,q) IL-unprovable but σ(p ⇔ q, q) provable: so for intuitionistic logic, the present σ(p,q) does indeed induce a genuine logical dependence (or proper coercive logical relation) on the de Jongh approach, between a biconditional and one of its components, despite this dependence vanishing on the extension to classical logic, where the relation becomes universal and so no longer properly coercive (Examples 2.2 and Proposition 4.4).
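As a quick sanity check of the law of triple equivalents (of its classical shadow only, of course, since truth tables cannot certify intuitionistic provability), the equivalence of the triply iterated biconditional with the single one can be verified by enumeration:

```python
# Classical verification that ((p <-> q) <-> q) <-> q agrees with p <-> q
# on every Boolean valuation. The law holds intuitionistically, hence in
# particular classically; only the classical half is checked here.
from itertools import product

iff = lambda a, b: a == b

assert all(iff(iff(iff(p, q), q), q) == iff(p, q)
           for p, q in product([True, False], repeat=2))
print("triple-equivalents law verified classically")
```

By contrast, (p ⇔ q) ⇔ q agrees classically with p but not intuitionistically, which is exactly the gap the sequent σ(p,q) above exploits.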

In fact with this example we are very close to the territory covered in Proposition 4.10 and Remark 4.11, in view of the following:

Proposition 2

For any formulas A, B, the following are intuitionistically equivalent:

  1. (i)

    (A → B) → A

  2. (ii)

    ((A → B) → B) ∧ (B → A)

  3. (iii)

    (A ⇔ B) ⇔ B.

Here (ii) is included to make it easier to see the IL-equivalence of (i) with (iii). The IL-equivalence of (i) and (ii) is well known, and can be found in Lemma 2.3 of Wojtylak [119].

Just for the record, in fact Wojtylak is interested in the case in which something of the form (i) is IL-provable, and thus, in effect, when A → B ⊢IL A, and he writes “B ◁ A” in this case, which is why we have reversed the direction in which the triangle points in “\(\mathcal {R}^{\triangleright }\)”, as we have been considering the converse. At p. 269 of [119], Wojtylak favours a reading of this (suggested by the literature on free topoi) as “A is high above B”. Here it is best to think of A, B as elements of a Heyting algebra, the equivalence classes ∥A∥IL, ∥B∥IL in the Lindenbaum algebra of IL. (ii) decomposes this, for the case in which (i) is IL-provable, into the conjunction of A’s anticipating B with the usual partial ordering ∥B∥IL ≤ ∥A∥IL—anticipation being the relation so called in the discussion after Definitions 3.3, lifted to a relation on the equivalence classes of formulas.

Now according to Proposition 4.10, formulas A,B are head-linked—can be rewritten equivalently as implications with a common consequent—according to IL iff they satisfy the two Peircean conditions:

$$(A \to B) \to A \vdash_\mathsf{IL} A \qquad \qquad (B \to A) \to B \vdash_\mathsf{IL} B.$$

The (unnumbered) Proposition above provides us with a further ‘properly coercive’ characterization of the relation of head-linkage, replacing the Peircean conditions with conditions allowing us to recover one component of a biconditional with the aid of the other; as in the Peircean case, the converse ⊢-statements are available for all A,B:

$$ (A \leftrightarrow B) \leftrightarrow B \vdash_\mathsf{IL} A \qquad \qquad (A \leftrightarrow B) \leftrightarrow A \vdash_\mathsf{IL} B$$

In the second case, we have kept “A ↔ B” intact on the left rather than splicing in “B ↔ A” verbatim.

1.7 On Section 7

Restricting the de Jongh Condition to Topoboolean Formulas

Since we are considering variations on the de Jongh account of independence, we include here a possible tightening of the proposal specific to its original setting, intuitionistic logic, and indeed most easily described in terms of the original formula-based rather than sequent-based presentation of that proposal. This, given in Definition 4.1 and Remarks 4.2, had us deeming A1,…,An to be independent in IL just in case for any formula C(p1,…,pn) with ⊢ILC(A1,…,An), we have ⊢ILC(p1,…,pn). If we replaced the references to IL with references to CL we would get something coextensive with Lemmon’s account of logical independence as a matter of not standing in any Lemmon relation \(\mathcal {R}^{\#}\) for any truth function # of the appropriate arity other than the constant-true truth-function (i.e., not standing in any non-universal \(\mathcal {R}^{\#}\)). But there is also another way de Jongh might have defined independence which would also have coincided, when applied in a classical setting, with Lemmon’s, but which makes for far less dependence in IL than his own, as recalled above. We now have to be careful about the truth-functions and the connectives and cannot afford the casual use of “#” as a variable ranging indiscriminately over both, reserving it for connectives of the (common) language of intuitionistic and classical propositional logic. We use f# for the truth-function conventionally associated over Boolean valuations with the connective #. The version of the Kripke semantics for IL employed in this discussion will take models to be posets (V,≤) with V a set of (as usual) bivalent valuations and u ≤ v meaning that for all formulas A, if u(A) = T then v(A) = T.Footnote 12 For each primitive n-ary connective # of the language of IL the truth-function f# satisfies for all u ∈ V for any model (V,≤):

$$ u\left( \#(A_{1},\ldots,A_{n})\right) = T \Longleftrightarrow \forall v \geq u\cdot f_{\#}\left( v(A_{1}),\ldots,v(A_{n})\right) = T. $$
(*)

This makes all formulas in which a single primitive connective (from the range ∧, ∨, ¬, →, ⇔, ⊤, ⊥—so n above will be 0, 1 or 2) is applied to n not necessarily distinct sentence letters topoboolean formulas in the sense of the following definition. The formula C(p1,…,pn), in which all sentence letters to occur are exhibited, is a topoboolean formula just in case there is some n-ary truth-function f such that for all u ∈ V for any model (V,≤):

$$ u\left( C(p_{1},\ldots,p_{n})\right) = T \Longleftrightarrow \forall v \geq u\cdot f\left( v(p_{1}),\ldots,v(p_{n})\right) = T. $$
(**)

The restriction we are now considering is quite different from that under discussion in the main body of Section 7, in that it properly restricts de Jongh’s own account of dependence as applied to IL itself. The restriction in question deems A1,…,An to be independent in IL just in case for any topoboolean formula C(p1,…,pn) with ⊢ILC(A1,…,An), we have ⊢ILC(p1,…,pn).Footnote 13

Why might this be a restriction worth considering? One reason is that it is a kind of ‘minimal mutilation’ of Lemmon’s account of the logical relations for classical logic. When we take the binary case, there will again be 16 pairwise non-IL-equivalent topoboolean formulas, since that is how many choices of f we have, all but one of which—the constant-true truth-function—engender a proper coercive logical relation, uniformly prescribing how a compound’s truth-value is to depend on those of its components at all extensions of one’s current epistemic state, to use a common gloss on the role of ≤ in the Kripke semantics. How better to register dependence, after all, than by giving a function describing that dependence? And if it is truth-value dependence that is at issue, that function will be a truth-function. For this reason if a special name were wanted for independence as understood by the above restricted version of de Jongh’s account, we could do worse than call it alethic independence, acknowledging its legitimacy alongside (unrestricted) de Jongh independence. (The contrast is again reminiscent of that between assertoric content and ingredient sense, mentioned in note 68, the latter attending, as unrestricted de Jongh (in)dependence does, to logical distinctions arising at arbitrary depths of embedding.)

Since the poset with a single valuation in it counts as a model (V,≤), one in which the “∀v ≥ u” never takes us anywhere else, we have the following useful negative test for being a topoboolean formula:

  • If C(p1,…,pn) and D(p1,…,pn) are classically but not intuitionistically equivalent formulas and C(p1,…,pn) is a topoboolean formula, then D(p1,…,pn) is not.

Applying this in the case of p → q and ¬p ∨ q, for example, we conclude that the latter is not a topoboolean formula and therefore that the IL-provability of ¬A ∨ B does not constitute a coercive logical relation in its own right—at least (if we are being pluralistic here), not a coercive alethic logical relation. Though not equivalent to A → B, however, its provability does imply the provability of A → B and so indirectly counts against independence as currently envisaged,Footnote 14 since not every substitution instance of the topoboolean formula p → q is IL-provable. But consider, then, a third implication-like relation, holding between A and B when ¬(A ∧ ¬B) is IL-provable: a condition weaker rather than stronger than the topoboolean →-condition. Or again, we could use the example of the relation of Wansing subcontrariety, concerning whose hallmark formula, ¬¬(p ∨ q), there is, it seems, no topoboolean IL-consequence other than ⊤ (to within IL-equivalence).Footnote 15 In that case Wansing subcontraries can be alethically independent. Changing variables to avoid confusion, r and ¬r ∨ s might be such a pair. (We cannot use r and ¬r since these are intuitionistic contraries, replacing p and q in the topoboolean marker formula ¬(p ∧ q).)
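The negative test can be witnessed concretely. The following two-point Kripke model is my illustration, not the paper's: it makes p → q true at the root while ¬p ∨ q fails there, confirming that the two formulas, though classically equivalent, come apart intuitionistically.

```python
# A two-point Kripke model for IL: 0 <= 1, with persistent valuation
# p: F at 0, T at 1; q: F at 0, T at 1.
points = [0, 1]
above = {0: [0, 1], 1: [1]}           # reflexive upward closure of <=
val = {('p', 0): False, ('p', 1): True,
       ('q', 0): False, ('q', 1): True}

def holds(formula, w):
    kind = formula[0]
    if kind == 'letter':
        return val[(formula[1], w)]
    if kind == 'imp':                  # A -> B holds at w iff at all v >= w
        return all(not holds(formula[1], v) or holds(formula[2], v)
                   for v in above[w])
    if kind == 'neg':                  # ~A holds at w iff A fails at all v >= w
        return all(not holds(formula[1], v) for v in above[w])
    if kind == 'or':
        return holds(formula[1], w) or holds(formula[2], w)

p, q = ('letter', 'p'), ('letter', 'q')
print(holds(('imp', p, q), 0))            # True
print(holds(('or', ('neg', p), q), 0))    # False
```

At the root, ¬p fails because p becomes true higher up, and q is not yet true, so the disjunction fails; the implication, by contrast, is verified at every point above the root.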

Interesting though the idea might be of a particularly tight kind of logical dependence consisting in a failure of alethic independence in the present sense, this discussion has been relegated to the longer notes because it is out of keeping with the general aims of our discussion. It is not only specifically oriented to (super)intuitionistic logic(s), but even explains its main concept—the topoboolean formulas—in terms of a specific semantics for that logic/range of logics. (For instance (*) would not be correct for the Beth semantics, taking # as ∨; Rousseau does observe in [94], though, that the notion of a topoboolean formula—as we are calling them for this very reason—is also naturally explained in the setting of the McKinsey–Tarski topological interpretation of IL.) And its applicability even to well-known enrichments of intuitionistic logic, such as adding strong negation (‘constructible falsity’), or dual intuitionistic implication and negation, is far from clear. Nevertheless, a comprehensive survey of promising notions of independence could hardly ignore it.

Indeed, the main body of our discussion has not ignored it: what we have here called alethic independence, for the sake of unveiling the news at this point—though the reader may well have seen this coming—is none other than independence in the transferred semantic sense, as applied to ⊢IL in Section 3 using Grygiel’s criterion Eq. 2.2 there.

On p. 473 of Rousseau [95], one reads the following (in which H—for “Heyting”—is IL):

In the proof of Theorem 2 it was shown that for each truth-function F the corresponding formula Fp1…pn is equivalent to a formula A = A(p1,…,pn) of H, namely a conjunction of formulas of the form

(8):

\((p_{i_{1}} \land {\ldots } \land p_{i_{r}}) \to (p_{i_{r + 1}} \lor {\ldots } \lor p_{i_{n}})\)

(where 0 ≦ r ≦ n).

From the back-reference here to Theorem 2, it is clear that the formulas Fp1…pn alluded to are exactly (to within IL-equivalence) what we have been calling the topoboolean formulas in the sentence letters p1,…,pn, of which sequence Rousseau’s \(p_{i_{1}},\ldots ,p_{i_{n}}\) in (8) is some permutation. (With no disjuncts in the consequent of (8), we understand the consequent as ⊥,Footnote 16 and in general, take ¬A as A →⊥ for present purposes; if there are no conjuncts in the antecedent, the implication schematically indicated is identified with its consequent, and the conjunction of such implications, when there are no conjuncts, we identify with ⊤.)

Now, the restriction on de Jongh’s context formulas we are currently considering is that for A1,…,An to be (‘alethically’) independent in IL we require that for any topoboolean formula C(p1,…,pn):

$$ \text{If}~\vdash_\mathsf{IL} C(A_{1},\ldots,A_{n}),~\text{then}~\vdash_\mathsf{IL} C(p_{1},\ldots,p_{n}). $$

Thus the basic coercive logical relations on this account are given by topoboolean formulas C(p1,…,pn) rather than (as on the original de Jongh account) by arbitrary such formulas, and the proper basic coercive cases are those for which \(\nvdash _{\mathsf {IL}} C(p_{1},\ldots ,p_{n})\). We can write such a formula in ‘Rousseau normal form’ as a conjunction \(C_{1} \land {\ldots } \land C_{2^{n}}\) of formulas of the form (8) in the quotation above. A representative case, with n = 3, is depicted in Table 3, with the rightmost column giving the various Ci(p1,p2,p3) for i = 1,…, 8. The middle column just substitutes schematic letters for the propositional variables as a half-way house en route to the Grygiel-style condition in the leftmost column. The basic ternary coercive logical relations are thus (exactly as they would be on Lemmon’s account—Remark 2.6) given by the 2^8 sets of conditions from the first column, or the 2^8 conjunctions of (unprovable) hallmark formulas from the third column, each of which, apart from the empty set of conditions (i.e. the empty conjunction ⊤), is a proper basic ternary coercive logical relation.Footnote 17 For example, the combination/conjunction of lines two and three represents the logical relation: A2 and A3 are equivalent given A1, while combining lines two, three and four gives a representation of the ternary incarnation of what has been called generalized equivalence; for the latter, see McKee [80]. (So we are considering the case of Eq. 2.2 in which Γ is {A1,A2,A3}, for the mutual independence of whose elements Eq. 2.2 demands that none of the conditions in the first column should obtain, taking ⊢ as ⊢IL.)
Thus the Rousseau normal form representation of topoboolean formulas makes evident the equivalence of alethic independence and independence on the transferred semantic account, contrasting in this respect with, for instance, a characterization of the topoboolean formulas as those which are intuitionistically equivalent (taking the primitive connectives as ∧,∨,→,⊥ for a convenient formulation here) to a formula in which no occurrence of → lies within the scope of an occurrence of ∨ or within the antecedent scope of another occurrence of →.Footnote 18
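The combinatorics of the Rousseau normal form can be checked directly. A sketch (ours; the string rendering is merely illustrative, with 'T'/'F' standing in for the empty-case conventions noted after the quotation) building the candidate conjuncts of form (8) for n = 3:

```python
from itertools import product

n = 3
VARS = ['p1', 'p2', 'p3']

# One candidate conjunct of form (8) per truth-table line: the antecedent
# conjoins the variables true on the line, the consequent disjoins those
# false on it.
def conjunct(line):
    ante = ' & '.join(v for v, b in zip(VARS, line) if b) or 'T'
    cons = ' | '.join(v for v, b in zip(VARS, line) if not b) or 'F'
    return f'({ante}) -> ({cons})'

lines = list(product([True, False], repeat=n))
print(len(lines))        # 8 conjuncts C_1, ..., C_8 to choose from
print(2 ** len(lines))   # 256 = 2^8 conjunctions, one per subset of the C_i
print(conjunct((True, True, False)))  # (p1 & p2) -> (p3)
```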

Translation and Independence

We take up a point made by a referee for the present journal, mentioned before Example 7.2. The observation in question concerns the formulas □(□p →□q) and □(□q →□r) and whether they should be regarded as independent according to S4, which the referee urged (giving □ an epistemic interpretation: “John knows that”) they should indeed be. The question is how to reconcile this with the fact that they are the McKinsey–Tarski translations of the intuitionistic formulas p → q and q → r, which, according to the de Jongh account as applied in its natural habitat, namely IL, are not independent, since the double negation of their disjunction is IL-provable, while this is not so for an arbitrary pair of formulas. In the terminology of the present paper, these formulas are Wansing subcontraries according to ⊢IL. Accordingly, as the McKinsey–Tarski translation advises us, the translation of this disjunction is S4-provable:

$$\Box\Diamond(\Box(\Box p \to \Box q) \lor \Box(\Box q \to \Box r)). $$
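The IL-provability claim underlying this rests, via Glivenko's theorem, on a classical tautology, which a truth-table check confirms (a sketch; the helper is ours):

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# (p -> q) v (q -> r) holds on every classical valuation: if q is true the
# first disjunct holds, and if q is false the second does. By Glivenko's
# theorem the double negation of a classical tautology is IL-provable, which
# is what blocks independence here on the de Jongh account.
tautology = all(implies(p, q) or implies(q, r)
                for p, q, r in product([True, False], repeat=3))
print(tautology)  # True
```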

On the Restricted de Jongh account suggested here, the sequent σ(r,s)—we change variables to avoid confusion with the sentence letters in the above formula—does not record a dependence (or proper coercive relation) between A = □(□p →□q) and B = □(□q →□r), even though \(\nvdash _{\mathsf {S4}} \sigma (r, s)\) while ⊢S4σ(A,B), because it is not eligible for ⊢S4 in view of the appearance of □ (a double appearance, taking ◊ to abbreviate ¬□¬), whose unamenability to unique characterization in all but the Post-complete normal modal logics has been emphasized. Thus, whatever the unrestricted de Jongh account may say, on the restricted de Jongh account the McKinsey–Tarski translation will not in general preserve dependence.
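As a finite sanity check on the S4-provability claimed above, one can brute-force model-check the translated formula over all reflexive transitive models with at most three worlds (a sketch under our own encoding of the semantics, not anything from the paper):

```python
from itertools import product

# Does Box Dia (Box(Box p -> Box q) v Box(Box q -> Box r)) hold at every
# world of the model (worlds, R) under the valuation p, q, r (sets of worlds)?
def holds_everywhere(worlds, R, p, q, r):
    W = set(worlds)
    def box(s):  # worlds all of whose R-successors lie in s
        return {w for w in worlds if all(v in s for v in worlds if (w, v) in R)}
    def dia(s):  # worlds with some R-successor in s
        return {w for w in worlds if any(v in s for v in worlds if (w, v) in R)}
    def imp(a, b):
        return (W - a) | b
    inner = box(imp(box(p), box(q))) | box(imp(box(q), box(r)))
    return box(dia(inner)) == W

ok = True
for n in (1, 2, 3):
    worlds = list(range(n))
    pairs = [(i, j) for i in worlds for j in worlds]
    for bits in product([0, 1], repeat=len(pairs)):
        R = {pr for pr, b in zip(pairs, bits) if b}
        if any((w, w) not in R for w in worlds):
            continue  # not reflexive
        if any((a, c) not in R for (a, b1) in R for (b2, c) in R if b1 == b2):
            continue  # not transitive
        for vbits in product([0, 1], repeat=3 * n):
            p, q, r = ({w for w in worlds if vbits[i * n + w]} for i in range(3))
            if not holds_everywhere(worlds, R, p, q, r):
                ok = False
print(ok)  # True: no countermodel among these small S4 models
```

Of course a finite search is only a sanity check, not a proof; S4-provability itself follows from the IL-provability of ¬¬((p → q) ∨ (q → r)) together with the faithfulness of the McKinsey–Tarski translation.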

Additional Comments on Example 7.2

The consequence relation in play in Example 7.2 was singled out semantically as the global consequence relation determined by the class of reflexive frames. For all that was said, however, we could equally well have used the weaker condition of converse seriality (every point is accessible from some point)—cf. Observation 6.32.4 on p. 853 of [53]—instead of reflexivity as our frame condition. This was avoided because of the comparative unfamiliarity of the latter condition. The consequence relation would have differed, since we would now lose the fact that □p → p was a consequence of \(\varnothing \); but this fact was nowhere appealed to. (Does the fact that it would have to be provable in a complete proof-system generating \(\vdash ^{\textit {glo}}_{\mathsf {KT}}\) spoil the claim of eligibility when uniqueness is taken in the weaker (‘to within equivalence’) sense, given this impurity, → figuring along with □? No: purity was only demanded of rules governing the connective whose unique characterization was at issue, and sufficing for such characterization. This was mentioned at the end of the paragraph following Definitions 7.1.)

Table 3 Alethic independence = Semantically transferred independence for IL


Humberstone, L. Explicating Logical Independence. J Philos Logic 49, 135–218 (2020). https://doi.org/10.1007/s10992-019-09516-w
