Motivating the Causal Modeling Semantics of Counterfactuals, or, Why We Should Favor the Causal Modeling Semantics over the Possible-Worlds Semantics

Structural Analysis of Non-Classical Logics

Part of the book series: Logic in Asia: Studia Logica Library ((LIAA))

Abstract

Philosophers have long analyzed the truth-condition of counterfactual conditionals in terms of the possible-worlds semantics advanced by Lewis [13] and Stalnaker [23]. In this paper, I argue that, from the perspective of philosophical semantics, the causal modeling semantics proposed by Pearl [17] and others (e.g., Briggs [3]) is more plausible than the Lewis-Stalnaker possible-worlds semantics. I offer two reasons. First, the possible-worlds semantics suffers from a specific type of counterexample. While the causal modeling semantics handles such examples with ease, the only way for the possible-worlds semantics to do so appears to cost it its distinctive status as a philosophical semantics. Second, the causal modeling semantics, but not the possible-worlds semantics, has the resources to account for both forward-tracking and backtracking counterfactual conditionals.


Notes

  1.

    Throughout this paper, propositions (or events) are denoted by italicized sentences.

  2.

    The selection function was first introduced by Stalnaker [23]. I am using the notion in a broader sense.

  3.

    There are other criticisms (cf., e.g., Pruss [19]). For simplicity's sake, I will leave them aside.

  4.

    Some might complain that cases like Power are illegitimate for involving supernatural power, or that counterfactuals with a physically impossible antecedent, such as "Power \(>\) Wine," should receive a different semantic treatment. However, I see no inherent problem with counterfactuals involving supernatural power. Nor do I think that the difference between "Power \(>\) Wine" and, say, "Bet \(>\) Win" warrants different semantic treatments.

  5.

    James Woodward has offered a counterexample to Lewis' idea that avoiding big miracles is always more important than avoiding small miracles:

    Consider a simple example ... C is a deterministic direct (type) cause of E but also deterministically causes E indirectly by means of n causal routes that go through C\(_{1}\),..., C\(_\mathrm{n}\). Consider the counterfactual (1) "If C\(_{1}\),..., C\(_\mathrm{n}\) had not occurred, E would not have occurred." (Woodward [26], Endnote 4)

    Intuitively, (1) seems false, but the system S\('\) fails to give the correct verdict. Let w\(_{10}\) be the world in which C, C\(_{1}\),..., C\(_\mathrm{n}\), and E hold; w\(_{11}\) be the world in which, due to a small miracle, C does not hold, and C\(_{1}\),..., C\(_\mathrm{n}\), and E do not hold; and w\(_{12}\) be the world in which C holds, but, due to a big miracle, C\(_{1}\),..., C\(_\mathrm{n}\) do not hold, while E still holds.

    Suppose that C is within the immediate past of C\(_{1}\),..., C\(_\mathrm{n}\). That C is within the immediate past of C\(_\mathrm{i}\) means that C had to have obtained if C\(_\mathrm{i}\) were to obtain (as we will see in Sect. 5.4, Lewis allows backtracking counterfactualization in the immediate past). It follows that, according to the S\('\)-possible-worlds semantics, w\(_{11}\) is more similar to w\(_{10}\) than w\(_{12}\) is, since w\(_{12}\) contains a big miracle while w\(_{11}\) does not. Hence, (1) turns out to be true, which is counterintuitive.

    Thanks to an anonymous reviewer for correcting a serious mistake in the original draft.

  6.

    That the possible-worlds semantics fails to account for backtracking counterfactuals is the reason why it also has difficulties dealing with backward counterfactuals (counterfactuals whose antecedents occur after their consequents) (cf. Northcott [16]) and backward causation (cf. Tooley [24]).

  7.

    In fact, Mandarin does not even syntactically distinguish counterfactual conditionals from indicative conditionals.

  8.

    The causal modeling semantics has been developed by Judea Pearl and many others (cf. Pearl [17]; also see Galles and Pearl [7]). The following formulation has been influenced by Briggs [3]. Hiddleston [10] has constructed a different type of causal modeling semantics. For more on Hiddleston's account, see Footnote 22.

  9.

    According to Ask, Jack will be mad at Jim if and only if they had a quarrel yesterday. We assume that none of the conditions sabotaging the if direction of the biconditional (such as Jack's having suffered from amnesia) holds. Nor does any of the conditions sabotaging the only-if direction (such as Jack's having a burst of anger) hold. The same goes for the other structural equations. In Galles and Pearl's [7] terms, these conditions are called "inhibiting" and "triggering abnormalities," respectively. Implicit in each structural equation is the assumption that such abnormalities do not hold.

  10.

    For the assignment function, cf. Hiddleston [10] and Briggs [3].

  11.

    Calculation: QUARREL = 1 and PRIDE = 1 (by assumption). If QUARREL = 1, then MAD = 1 (by MAD \(\Leftarrow \) QUARREL). If QUARREL = 1 and PRIDE = 1, then ASK = 0 (by ASK \(\Leftarrow \) (\(\sim \)PRIDE \(\vee \) \(\sim \)QUARREL)). If MAD = 1, then HELP = 0 (by HELP \(\Leftarrow \) (ASK \(\wedge \) \(\sim \)MAD)).
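    To make these step-by-step calculations concrete, here is a minimal Python sketch of the Ask model. The encoding (the `equations` dictionary and the `evaluate` helper) is my own illustration, not part of the author's or Pearl's formalism:

```python
# Structural equations of the Ask model (cf. the calculation above).
# Exogenous variables: QUARREL, PRIDE. Endogenous: MAD, ASK, HELP.
equations = {
    "MAD":  lambda v: v["QUARREL"],                                 # MAD <= QUARREL
    "ASK":  lambda v: int((not v["PRIDE"]) or (not v["QUARREL"])),  # ASK <= (~PRIDE v ~QUARREL)
    "HELP": lambda v: int(v["ASK"] and (not v["MAD"])),             # HELP <= (ASK & ~MAD)
}

def evaluate(exogenous, eqs, order=("MAD", "ASK", "HELP")):
    """Compute every endogenous value from the exogenous settings, parents first."""
    v = dict(exogenous)
    for var in order:
        v[var] = eqs[var](v)
    return v

print(evaluate({"QUARREL": 1, "PRIDE": 1}, equations))
# -> {'QUARREL': 1, 'PRIDE': 1, 'MAD': 1, 'ASK': 0, 'HELP': 0}, matching the calculation
```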

  12.

    Galles and Pearl's [7] original semantics has limited expressive power. In particular, they consider only counterfactuals of the form "(A\(_{1}\) \(\wedge \) ... \(\wedge \) A\(_\mathrm{n}) >\) (C\(_{1}\) \(\wedge \) ... \(\wedge \) C\(_\mathrm{m})\)," where A\(_\mathrm{i}\) and C\(_\mathrm{j}\) have the forms "A\(_\mathrm{i}\) = a\(_\mathrm{i}\)" and "C\(_\mathrm{j}\) = c\(_\mathrm{j}\)," respectively. Halpern [8] has developed a semantics for "A \(>\) C" with A taking the form "A\(_{1}\) \(\wedge \) ... \(\wedge \) A\(_\mathrm{n}\)" (like Pearl's), while C may be any Boolean combination of sentences of the form "C\(_\mathrm{i}\) = c\(_\mathrm{i}\)." Briggs [3] further extends the semantics to deal with "A \(>\) C" where A is any Boolean combination of sentences of the form "A\(_\mathrm{i}\) = a\(_\mathrm{i}\)." For simplicity's sake, I will here focus on a language with less expressive power. That is, I will follow Pearl in assuming that the sentences involved in intervention (and extrapolation) consist only of conjunctions.

  13.

    Thanks to an anonymous reviewer for pointing out some problems in the original formulation. Also see the definition of extrapolation below.

  14.

    Calculation: QUARREL = 1 and PRIDE = 1 (by assumption). MAD = 0 (by intervention). If QUARREL = 1 and PRIDE = 1, then ASK = 0 (by ASK \(\Leftarrow \) (\(\sim \)PRIDE \(\vee \) \(\sim \)QUARREL)). If ASK = 0, then HELP = 0 (by HELP \(\Leftarrow \) (ASK \(\wedge \) \(\sim \)MAD)).
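    On this formulation, intervening with respect to (MAD = 0) amounts to replacing MAD's equation with the constant 0 and recomputing. A sketch continuing the hypothetical encoding from Footnote 11:

```python
def intervene(eqs, var, value):
    """Return the submodel in which var's equation is replaced by a constant,
    cutting var off from its parents (Pearl-style intervention)."""
    sub = dict(eqs)
    sub[var] = lambda v: value
    return sub

# Evaluating "MAD = 0 > HELP = 0" by intervention, as in the calculation above:
print(evaluate({"QUARREL": 1, "PRIDE": 1}, intervene(equations, "MAD", 0)))
# -> {'QUARREL': 1, 'PRIDE': 1, 'MAD': 0, 'ASK': 0, 'HELP': 0}
```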

  15.

    Calculation: MAD = 0 (by extrapolation). PRIDE = 1 (by assumption). If MAD = 0, then QUARREL = 0 (by MAD \(\Leftarrow \) QUARREL). If QUARREL = 0, then ASK = 1 (by ASK \(\Leftarrow \) (\(\sim \)PRIDE \(\vee \) \(\sim \)QUARREL)). If MAD = 0 and ASK = 1, then HELP = 1 (by HELP \(\Leftarrow \) (ASK \(\wedge \) \(\sim \)MAD)).
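    Extrapolation, by contrast, keeps the equations intact and lets the new value propagate backward to MAD's parents, holding the contextually fixed variables (here PRIDE) constant. In the same hypothetical encoding; the back-solving step below is hard-coded for this particular model, not a general algorithm:

```python
# Evaluating "MAD = 0 > HELP = 1" by extrapolation, holding PRIDE fixed:
v = {"MAD": 0, "PRIDE": 1}
v["QUARREL"] = v["MAD"]                                 # backtrack: MAD <= QUARREL forces QUARREL = 0
v["ASK"] = int((not v["PRIDE"]) or (not v["QUARREL"]))  # -> 1
v["HELP"] = int(v["ASK"] and (not v["MAD"]))            # -> 1
print(v)  # -> {'MAD': 0, 'PRIDE': 1, 'QUARREL': 0, 'ASK': 1, 'HELP': 1}
```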

  16.

    This point was originally addressed in a footnote. Thanks to an anonymous reviewer for urging me to address it in the main text.

  17.

    Calculation: X\(_{3}\) = 1 (by extrapolation). X\(_{1}\) = 1 (by assumption). If X\(_{3}\) = 1 and X\(_{1}\) = 1, then X\(_{2}\) = 1 (by X\(_{3} \Leftarrow \sim \)X\(_{1}\) \(\vee \) X\(_{2}\)). If X\(_{2}\) = 1, then X\(_{4}\) = 0 (by X\(_{4} \Leftarrow \sim \)X\(_{2}\) \(\wedge \) X\(_{3}\)).

  18.

    Calculation: X\(_{3}\) = 1 (by extrapolation). X\(_{2}\) = 0 (by assumption). If X\(_{3}\) = 1 and X\(_{2}\) = 0, then X\(_{1}\) = 0 (by X\(_{3} \Leftarrow \sim \)X\(_{1}\) \(\vee \) X\(_{2}\)). If X\(_{2}\) = 0 and X\(_{3}\) = 1, then X\(_{4}\) = 1 (by X\(_{4} \Leftarrow \sim \)X\(_{2}\) \(\wedge \) X\(_{3}\)).
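    Footnotes 17 and 18 show that extrapolating with respect to the same value (X\(_{3}\) = 1) yields different verdicts depending on what the context holds fixed. A hedged sketch of both computations, hard-coded in my own encoding of the two equations:

```python
# Extrapolating X3 = 1 while holding X1 = 1 fixed (Footnote 17):
x1, x3 = 1, 1
x2 = 1                     # forced: X3 <= (~X1 v X2) and ~X1 = 0
x4 = int((not x2) and x3)  # X4 <= (~X2 & X3) -> 0

# Extrapolating X3 = 1 while holding X2 = 0 fixed (Footnote 18):
y2, y3 = 0, 1
y1 = 0                     # forced: X3 <= (~X1 v X2) and X2 = 0
y4 = int((not y2) and y3)  # -> 1

print(x4, y4)  # -> 0 1: same extrapolation, different contexts, opposite verdicts
```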

  19.

    The term “relevant submodel,” suggested by an anonymous reviewer, is from Hiddleston [10]. Also see Hiddleston ([10], 650ff.) for a related discussion.

  20.

    It is not necessary that the context always determines a unique submodel.

  21.

    According to the aforementioned formulation, intervention always determines a unique submodel. Intervention is hence only vacuously context-sensitive: different contexts give rise to the same (set of) relevant submodels. However, the context-insensitivity of intervention may have more to do with the way intervention is formulated here than with the general notion of intervention. For instance, we have limited our attention to interventions involving conjunctions, i.e., (A\(_{1}\) \(\wedge \) ... \(\wedge \) A\(_{\mathrm{n}})\), since we only consider counterfactuals whose antecedents are of the form "A\(_{1}\) \(\wedge \) ... \(\wedge \) A\(_{\mathrm{n}}\)." Intervention of this specific sort determines a unique submodel. However, intervening in a model with respect to a disjunction may fail to determine a unique submodel (cf. Briggs [3], 152ff.). Hence, the notion of relevant submodels applies to intervention as well.

  22.

    Hiddleston [10] has proposed a causal modeling semantics of counterfactuals that bears some similarities to CM\(_{\mathrm{EX}}\). There are two main differences between them, though. First, while the causal modeling semantics presented above takes structural equations to specify deterministic laws between a variable Y and its parents X's (see Footnote 10), Hiddleston's semantics takes structural equations to be indeterministic laws formulated in probabilistic terms. Second, Hiddleston's semantics is concerned only with positive causal influences, while CM\(_{\mathrm{EX}}\) takes into account both positive and negative causal influences.

    Let us say that (X = x) has a direct positive influence on (Y = y) in a causal model M if the probability of (Y = y) is raised by (X = x), other things being equal. We call all the variables that have a direct positive influence on (Y = y) the positive parents of Y. Suppose that M\(^\prime \) is a submodel of M. If the value of Y in M\(^\prime \) is different from Y's value in M, while Y's positive parents' values in M\(^\prime \) and M are the same, then we say that M\(^\prime \) contains a Causal Break. If Y's value and Y's positive parents' values in M\(^\prime \) and M are the same, then we say that M\(^\prime \) contains a Causal Intact. According to Hiddleston's semantics, very roughly, "A \(>\) C" is true in M iff C is true in all submodels M\(^\prime \) such that A is true in M\(^\prime \) and M\(^\prime \) contains the maximal number of Causal Intacts and the minimal number of Causal Breaks. When "A \(>\) C" is true in M in Hiddleston's sense, let us say that "A \(>\) C" is true in the Maximal-Intact-and-Minimal-Break M\(^\prime \).

    For the present purposes, it is worth pointing out that if a causal model M contains no probabilistic equations (i.e., Y's parents raise the probability of Y getting the value y to 1), and if all of Y's parents X's are positive parents, then being true in the Maximal-Intact-and-Minimal-Break M\(^\prime \) and being true\(_{\mathrm{EX}}\) in M converge. That is, in such limited cases, "A \(>\) C" is true in the Maximal-Intact-and-Minimal-Break M\(^\prime \) iff "C" is true in M\(^{\mathrm{A}}\) (i.e., iff "A \(>\) C" is true\(_{\mathrm{EX}}\) in M).

    However, even in such cases, Hiddleston's semantics and CM\(_{\mathrm{EX}}\) are still fundamentally different. First, Hiddleston's semantics is supposed to be a complete semantics on its own. It does not admit the ambiguity of counterfactuals indicated by Ask. In particular, it does not allow the same counterfactual to have a forward-tracking as well as a backtracking interpretation. Hence, Hiddleston's semantics faces the same problem as the possible-worlds semantics does.

    Second, Hiddleston's semantics characterizes the truth-condition of counterfactuals in terms of the notion of being true in the Maximal-Intact-and-Minimal-Break M\(^\prime \). Now, we know that CM\(_{\mathrm{EX}}\) cannot account for cases of forward-tracking counterfactuals, which are best suited to CM\(_{\mathrm{IN}}\). Given that Hiddleston's semantics basically is CM\(_{\mathrm{EX}}\) when no probabilistic equations are involved, it follows that the only way for Hiddleston's semantics to explain a forward-tracking counterfactual, say, A \(>\) C, is to stipulate that A raises the probability of C to n, where n \(<\) 1. I think this approach will lead to some serious problems, but I will not pursue this line of thought here. What this shows is that Hiddleston's semantics and the present account handle the truth-condition of counterfactuals very differently.

    I would like to thank an anonymous reviewer for pushing me to elaborate this point.

  23.

    For an elaboration, see Sloman ([21], Chap. 5).

  24.

    Thanks to an anonymous reviewer for urging me to elaborate this point.

  25.

    An anonymous reviewer also points out to me that the existence of M\(^{\mathrm{Ci=ci}}\) depends on C\(_{\mathrm{i}}\) = c\(_{\mathrm{i}}\) being compatible with the set of structural equations S of M, while the existence of M\(_{\mathrm{Ci=ci}}\) is not so constrained. This feature is worth exploring, but I will not carry out the task here.

  26.

    Calculation: QUARREL = 1 and PRIDE = 1 (by assumption). ASK = 1 (by intervention). If QUARREL = 1, then MAD = 1 (by MAD \(\Leftarrow \) QUARREL). If MAD = 1, then HELP = 0 (by HELP \(\Leftarrow \) (ASK \(\wedge \) \(\sim \)MAD)).

  27.

    Calculation: ASK = 1 (by extrapolation). PRIDE = 1 (by assumption). If ASK = 1 and PRIDE = 1, then QUARREL = 0 (by ASK \(\Leftarrow \) (\(\sim \)PRIDE \(\vee \) \(\sim \)QUARREL)). If QUARREL = 0, then MAD = 0 (by MAD \(\Leftarrow \) QUARREL). If MAD = 0 and ASK = 1, then HELP = 1 (by HELP \(\Leftarrow \) (ASK \(\wedge \) \(\sim \)MAD)). However, acute readers may notice that the calculation above holds (PRIDE = 1) fixed. It is by doing so that we deduce HELP = 1. Suppose that we hold (QUARREL = 1) fixed instead. We would then get the opposite result: if QUARREL = 1, then MAD = 1 (by MAD \(\Leftarrow \) QUARREL). If MAD = 1 and ASK = 1, then HELP = 0 (by HELP \(\Leftarrow \) (ASK \(\wedge \) \(\sim \)MAD)).

    As noted, counterfactualization\(_{\mathrm{EX}}\) is context-sensitive; extrapolating a causal model with respect to (C\(_{\mathrm{i}}\) = c\(_{\mathrm{i}})\) requires holding something fixed, and what should be held fixed is always a matter determined by the context.

    The idea that extrapolation is context-sensitive is quite intuitive in this case, as counterfactualization\(_{\mathrm{EX}}\) is context-sensitive in a parallel way. For instance, there are two ways to counterfactualize\(_{\mathrm{EX}}\) what would have happened if Jim were to ask Jack for help. On the one hand, if Jim were to ask Jack for help, it must be that Jim had somehow swallowed his pride, since they had had a quarrel yesterday, and if Jim did not swallow his pride, he would not have asked Jack for help. On the other hand, if Jim were to ask Jack for help, it must be that Jack was not mad at him, since Jim was a prideful person who would not have asked Jack for help after quarreling with him. Both are legitimate counterfactualizations\(_{\mathrm{EX}}\), and only the context can tell which one is to be adopted.
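    To illustrate, here is a sketch of the two readings in the hypothetical encoding used in the earlier footnotes; which computation is right is settled only by the context:

```python
# Reading 1: hold PRIDE = 1 fixed (the calculation above).
v = {"ASK": 1, "PRIDE": 1}
v["QUARREL"] = 0                               # forced: ASK <= (~PRIDE v ~QUARREL)
v["MAD"] = v["QUARREL"]                        # -> 0
v["HELP"] = int(v["ASK"] and (not v["MAD"]))   # -> 1

# Reading 2: hold QUARREL = 1 fixed instead.
w = {"ASK": 1, "QUARREL": 1}
w["PRIDE"] = 0                                 # forced: Jim must have swallowed his pride
w["MAD"] = w["QUARREL"]                        # -> 1
w["HELP"] = int(w["ASK"] and (not w["MAD"]))   # -> 0

print(v["HELP"], w["HELP"])  # -> 1 0
```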

  28.

    Calculation: PUSH = 0 (by assumption). If PUSH = 0, then SIGNAL = 0 (by SIGNAL \(\Leftarrow \) PUSH). If SIGNAL = 0, then BOX = 1 (by BOX \(\Leftarrow \sim \)SIGNAL). If SIGNAL = 0, then DESTROY = 0 (by DESTROY \(\Leftarrow \) SIGNAL).

  29.

    Calculation: PUSH = 1 (by intervention). If PUSH = 1, then SIGNAL = 1 (by SIGNAL \(\Leftarrow \) PUSH). If SIGNAL = 1, then BOX = 0 (by BOX \(\Leftarrow \sim \)SIGNAL). If SIGNAL = 1, then DESTROY = 1 (by DESTROY \(\Leftarrow \) SIGNAL).

  30.

    Notice that, given that PUSH is an exogenous variable, to intervene in B with respect to (PUSH = 1) is tantamount to extrapolating B with respect to (PUSH = 1). That is, B\(_{\mathrm{(PUSH=1)}}\) is identical to B\(^{\mathrm{(PUSH=1)}}\). It follows that "PUSH = 1 \(>\) DESTROY = 1" is also true\(_{\mathrm{EX}}\) in B.

    That B\(_{\mathrm{(PUSH=1)}}\) is identical to B\(^{\mathrm{(PUSH=1)}}\) should not be surprising given that PUSH is an exogenous variable. The difference between intervention and extrapolation consists in the fact that the latter, but not the former, allows the values of PUSH's parents to be subject to change. Since PUSH has no parents, B\(_{\mathrm{(PUSH=1)}}\) and B\(^{\mathrm{(PUSH=1)}}\) naturally converge. Also see the end of Sect. 5.5.
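    The convergence is easy to verify in the sketch encoding: since PUSH has no parents, there is nothing for extrapolation to back-solve, so intervention and extrapolation perform the very same computation (reusing the hypothetical `evaluate` helper from Footnote 11):

```python
# The Box model: SIGNAL <= PUSH, BOX <= ~SIGNAL, DESTROY <= SIGNAL.
box_eqs = {
    "SIGNAL":  lambda v: v["PUSH"],
    "BOX":     lambda v: int(not v["SIGNAL"]),
    "DESTROY": lambda v: v["SIGNAL"],
}
order = ("SIGNAL", "BOX", "DESTROY")

# Intervening on the exogenous PUSH just sets its value; extrapolation has no
# parents of PUSH to revise, so both yield the same submodel.
print(evaluate({"PUSH": 1}, box_eqs, order))
# -> {'PUSH': 1, 'SIGNAL': 1, 'BOX': 0, 'DESTROY': 1}, so DESTROY = 1 on both readings
```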

  31.

    This part was omitted in the original draft. Thanks to an anonymous reviewer for urging me to put it in the main text.

  32.

    Calculation: BET = 0 (by assumption). HEADS = 1 (by assumption). If BET = 0, then WIN = 0 (by WIN \(\Leftarrow \) (HEADS \(\wedge \) BET)).

  33.

    An explanation of Bet may not need to posit indeterministic (probabilistic) causal connections among variables. But one may wonder whether in some other cases the causal connections among variables should be characterized in probabilistic terms. The present account, however, does not allow such a characterization, as we have implicitly assumed that what Galles and Pearl call "inhibiting" and "triggering abnormalities" do not hold (see Footnote 9). This line of thought assumes that indeterministic relationships between events are the result of our ignorance. While this assumption may not square well with quantum physics, it does fit well with our ordinary notion of causation (also see Pearl [17], 26–7).

  34.

    Calculation: HEADS = 1 (by assumption). BET = 1 (by intervention). If HEADS = 1 and BET = 1, then WIN = 1 (by WIN \(\Leftarrow \) (HEADS \(\wedge \) BET)).
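    In the sketch encoding, the Bet model is a single equation, and both footnote calculations fall out directly (again, my own hypothetical illustration using the `evaluate` helper from Footnote 11):

```python
# The Bet model: WIN <= (HEADS & BET).
bet_eqs = {"WIN": lambda v: int(v["HEADS"] and v["BET"])}

print(evaluate({"HEADS": 1, "BET": 0}, bet_eqs, ("WIN",)))  # WIN = 0 (Footnote 32)
print(evaluate({"HEADS": 1, "BET": 1}, bet_eqs, ("WIN",)))  # WIN = 1 (Footnote 34)
```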

  35.

    Since BET is an exogenous variable, being true\(_{\mathrm{IN}}\) in T is tantamount to being true\(_{\mathrm{EX}}\) in T. Also see the end of Sect. 5.5.

References

  1. Bennett, J.: Counterfactuals and temporal direction. Philos. Rev. 93(1), 57–91 (1984)

  2. Bennett, J.: A Philosophical Guide to Conditionals. Clarendon Press, Oxford (2003)

  3. Briggs, R.: Interventionist counterfactuals. Philos. Stud. 160(1), 139–166 (2012)

  4. Downing, P.B.: Subjunctive conditionals, time order, and causation. Proc. Aristotelian Soc. 59, 125–140 (1958)

  5. Edgington, D.: Counterfactuals and the benefit of hindsight. In: Dowe, P., Noordhof, P. (eds.) Cause and Chance: Causation in an Indeterministic World, pp. 12–27. Routledge, New York (2004)

  6. Fine, K.: Critical notice of Lewis (1973). Mind 84(1), 451–458 (1975)

  7. Galles, D., Pearl, J.: An axiomatic characterization of causal counterfactuals. Found. Sci. 3(1), 151–182 (1998)

  8. Halpern, J.Y.: Axiomatizing causal reasoning. J. Artif. Intell. Res. 12(1), 317–337 (2000)

  9. Hawthorne, J.: Chance and counterfactuals. Philos. Phenomenol. Res. 70(2), 396–405 (2005)

  10. Hiddleston, E.: A causal theory of counterfactuals. Noûs 39(4), 632–657 (2005)

  11. Hitchcock, C.: The intransitivity of causation revealed in equations and graphs. J. Philos. 98(6), 273–299 (2001)

  12. Kahneman, D.: Thinking, Fast and Slow. Farrar, Straus and Giroux, New York (2011)

  13. Lewis, D.: Counterfactuals. Blackwell, Malden (1973)

  14. Lewis, D.: Counterfactual dependence and time's arrow. Noûs 13(4), 455–476 (1979)

  15. Lewis, D.: Postscripts to 'Counterfactual dependence and time's arrow'. In: Philosophical Papers II, pp. 52–66. Oxford University Press, Oxford (1986)

  16. Northcott, R.: On Lewis, Schaffer and the non-reductive evaluation of counterfactuals. Theoria 75(4), 336–343 (2009)

  17. Pearl, J.: Causality: Models, Reasoning, and Inference. Cambridge University Press, Cambridge (2000)

  18. Pearl, J.: Reasoning with cause and effect. AI Mag. 23(1), 95–111 (2002)

  19. Pruss, A.R.: David Lewis's counterfactual arrow of time. Noûs 37(4), 606–637 (2003)

  20. Schaffer, J.: Counterfactuals, causal independence and conceptual circularity. Analysis 64(4), 299–309 (2004)

  21. Sloman, S.A.: Causal Models: How People Think About the World and Its Alternatives. Oxford University Press, Oxford (2009)

  22. Slote, M.A.: Time in counterfactuals. Philos. Rev. 87(1), 3–27 (1978)

  23. Stalnaker, R.: A theory of conditionals. In: Harper, W.L., Stalnaker, R., Pearce, G. (eds.) Ifs: Conditionals, Belief, Decision, Chance, and Time, pp. 41–55. D. Reidel Publishing Company, Boston (1968)

  24. Tooley, M.: Backward causation and the Stalnaker-Lewis approach to counterfactuals. Analysis 62(3), 191–197 (2002)

  25. Wasserman, R.: The future similarity objection revisited. Synthese 150(1), 57–67 (2006)

  26. Woodward, J.: Causation and manipulability. In: Zalta, E.N. (ed.) The Stanford Encyclopedia of Philosophy (Winter 2013 Edition). http://plato.stanford.edu/archives/win2013/entries/causation-mani/


Acknowledgments

I am grateful to two anonymous reviewers for helpful comments. Specifically, one reviewer gave me invaluable suggestions and corrections, which greatly improved the original draft and inspired my thoughts on these issues. I also want to thank Daniel Marshall for helpful comments and for proofreading an earlier draft. I am also indebted to the participants of the Taiwan Philosophical Logic Colloquium in 2014 for comments and discussions. The present work has received funding from the Ministry of Science and Technology (MOST) of Taiwan (R.O.C.) (MOST 103-2410-H-194-125).

Correspondence to Kok Yong Lee.


Copyright information

© 2016 Springer-Verlag Berlin Heidelberg

Cite this chapter

Lee, K.Y. (2016). Motivating the Causal Modeling Semantics of Counterfactuals, or, Why We Should Favor the Causal Modeling Semantics over the Possible-Worlds Semantics. In: Yang, SM., Deng, DM., Lin, H. (eds) Structural Analysis of Non-Classical Logics. Logic in Asia: Studia Logica Library. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-48357-2_5
