Confirmation, increase in probability, and the likelihood ratio measure: A reply to Glass and McCartney

William Roche
Department of Philosophy, Texas Christian University, Fort Worth, TX, USA
e-mail: w.roche@tcu.edu

ABSTRACT: Bayesian confirmation theory is rife with confirmation measures. Zalabardo (2009) focuses on the probability difference measure, the probability ratio measure, the likelihood difference measure, and the likelihood ratio measure. He argues that the likelihood ratio measure is adequate but each of the other three measures is not. He argues for this by setting out three adequacy conditions on confirmation measures and arguing in effect that all of them are met by the likelihood ratio measure but not by any of the other three measures. Glass and McCartney (2015), hereafter "G&M", accept the conclusion of Zalabardo's argument along with each of the premises in it. They nonetheless try to improve on Zalabardo's argument by replacing his third adequacy condition with a weaker condition. They do this because of a worry to the effect that Zalabardo's third adequacy condition runs counter to the idea behind his first adequacy condition. G&M have in mind confirmation in the sense of increase in probability: the degree to which E confirms H is a matter of the degree to which E increases H's probability. I call this sense of confirmation "IP". I set out four ways of precisifying IP. I call them "IP1", "IP2", "IP3", and "IP4". Each of them is based on the assumption that the degree to which E increases H's probability is a matter of the distance between p(H | E) and a certain other probability involving H. I then evaluate G&M's argument (with a minor fix) in light of them.
KEYWORDS: Bayesian confirmation theory; confirmation; Glass and McCartney; increase in probability; likelihood ratio measure; Zalabardo

1 Introduction

Bayesian confirmation theory is rife with confirmation measures.1 Below are four:

c1(H, E) = p(H | E) − p(H)

c2(H, E) = p(H | E) / p(H)

c3(H, E) = p(E | H) − p(E | ¬H)

c4(H, E) = p(E | H) / p(E | ¬H)

c1 is the "probability difference" measure. c2 is the "probability ratio" measure. c3 is the "likelihood difference" measure. c4 is the "likelihood ratio" measure. They are alike in that on each of them there is a neutral value n such that the degree to which E confirms H is greater than n if and only if p(H | E) > p(H).2 But no two of them are ordinally equivalent to each other.3 This is prima facie problematic. Each of c1-c4 is plausible prima facie. But certain results in Bayesian confirmation theory involving some such measures fail to carry over to at least some of the others. This is the so-called "problem of measure sensitivity" (see Brössel 2013 and Fitelson 1999).

Zalabardo (2009) argues that c4 is adequate but c1, c2, and c3 are not. He argues for this by setting out three adequacy conditions on confirmation measures and arguing in effect that all of them are met by c4 but not by c1, c2, or c3.4 Glass and McCartney (2015), hereafter "G&M", accept the conclusion of Zalabardo's argument along with each of the premises in it. They nonetheless try to improve on Zalabardo's argument by replacing his third adequacy condition with a weaker condition.5

1 See Roche and Shogenji (2014) for a list of the main confirmation measures in the literature. See Roche (2015) for an expanded list.

2 The neutral value for c1 and c3 is 0. The neutral value for c2 and c4 is 1.

3 Let c and c* be confirmation measures.
Then c and c* are ordinally equivalent to each other if and only if the following holds for all ordered pairs of propositions <H1, E1> and <H2, E2>: c(H1, E1) > / = / < c(H2, E2) if and only if c*(H1, E1) > / = / < c*(H2, E2).

4 I say "in effect" because Zalabardo does not argue explicitly that c4 meets each of his three adequacy conditions. He argues explicitly that (i) c2 but not c1 meets his first adequacy condition, (ii) c4 but not c3 meets his second adequacy condition, and (iii) c4 but not c2 meets his third adequacy condition. I take it to be a tacit premise in his argument, though, that c4 meets his first adequacy condition, since without that premise (or a premise or premises entailing it) it would not follow that c4 is adequate but c1, c2, and c3 are not.

5 G&M focus on logarithmic versions of c2 and c4. This difference does not matter in this discussion.

They do this because of a worry to the effect that Zalabardo's third adequacy condition runs counter to the idea behind his first adequacy condition.6

G&M's argument is worthy of careful study. If it succeeds, then it thereby solves the problem of measure sensitivity. Does it succeed, though? I find it difficult at this point to answer this question. The reason is not that it is unclear whether G&M have in mind confirmation in the sense of increase in probability:

Increase in Probability (IP)
The degree to which E confirms H is a matter of the degree to which E increases H's probability.

It is clear from the following passage that they do:

While it [viz., Adequacy Condition 4.0 below in Section 2] is suitable if confirmation is to be considered as a generalization of entailment, it is not so clear that it must be accepted if confirmation is to be considered in terms of the more general notion of evidential support as discussed at the start of this paper, i.e. quantifying the extent to which the hypothesis is made more probable by the evidence. (G&M 2015, pp.
62-63, emphasis added)

The reason, rather, is that IP is somewhat vague and can be precisified in different rather natural ways. I want to focus on four ways in particular of precisifying IP. I call them "IP1", "IP2", "IP3", and "IP4". Each of them is based on the assumption that the degree to which E increases H's probability is a matter of the distance between p(H | E) and a certain other probability involving H. The expression "increase in probability" would be a misnomer if this assumption were false.

I shall assume a pluralistic approach on which there is no need to choose between IP1, IP2, IP3, and IP4. They all capture an important respect in which E can be related qua evidence to H and thus all have a place in Bayesian confirmation theory.7 This approach should be welcome to G&M. Suppose, for example, that G&M's argument is unsound in the context of each of IP1, IP2, and IP3. Suppose, in other words, that it follows from each of IP1, IP2, and IP3 that G&M's argument is unsound. Then, given my pluralistic approach, it might be that this is unproblematic for G&M's argument because, at the same time, G&M's argument is sound in the context of IP4.

Is G&M's argument sound in the context of any of IP1, IP2, IP3, and IP4? I aim to show that the answer is negative. I aim to show, that is, that it follows from each of IP1, IP2, IP3, and IP4 that G&M's argument is unsound.

6 See Iranzo and Martinez de Lejarza (2013) and Roche (2016) for additional worries concerning Zalabardo's argument.

7 A pluralistic approach is taken in Hajek and Joyce (2008) and Joyce (1999, Ch. 6, sec. 6.4, 2008). A very different, monistic approach is taken in Milne (1996).

The remainder of the paper is organized as follows. In Section 2, I explain G&M's argument. I call it "GMA". In Section 3, I note a minor error in GMA and suggest a fix. I call the resulting argument "GMA*".
In Section 4, I set out IP1, IP2, IP3, and IP4 and argue that c4 is inadequate in the context of each of IP1, IP2, IP3, and IP4. I argue, that is, that it follows from each of IP1, IP2, IP3, and IP4 that c4 is inadequate. The upshot is that any valid argument in support of c4, for example GMA*, is unsound in the context of each of IP1, IP2, IP3, and IP4. In Section 5, I conclude.

2 G&M's Argument (GMA)

It will help to start with Zalabardo's argument. I explain it in Subsection 2.1 and then turn to GMA in Subsection 2.2.

2.1 Zalabardo

Zalabardo's first adequacy condition concerns a case given in Schlesinger (1995). Zalabardo introduces the case as follows:

Schlesinger asks us to compare two scenarios. In the first, we consider a type of aircraft which is regarded as extremely safe, with a 1/10^9 probability of crashing in a single flight. However, further inspection of the structure of the aircraft reveals a flaw as a result of which the probability of one of these planes crashing is actually 1/100. The second scenario concerns troops landing gliders behind enemy lines. We start from the assumption that someone taking part in one of these operations has a 26% chance of perishing, but one day the commander announces that owing to peculiar weather conditions the risk has increased from 26% to 27%. (Zalabardo 2009, pp. 631-632)

Zalabardo then claims in agreement with Schlesinger that the degree of confirmation in the first scenario is greater than the degree of confirmation in the second scenario:

As Schlesinger argues, the degree to which the inspection of the aircraft confirms the hypothesis of a plane crash is intuitively much higher than the degree to which the unusual weather conditions confirm the hypothesis of a glider mission resulting in death. (Zalabardo 2009, p.
632)

Thus the condition:

Adequacy Condition 1.0 (AC1.0)
Let E1 be the evidence in the first scenario, H1 be the hypothesis in the first scenario, E2 be the evidence in the second scenario, and H2 be the hypothesis in the second scenario, so that p(H1 | E1) = 0.01 > 0.000000001 = p(H1) whereas p(H2 | E2) = 0.27 > 0.26 = p(H2). Then c(H1, E1) > c(H2, E2).

This is Zalabardo's first adequacy condition.8 Since c1 fails to meet AC1.0, he rejects c1 as inadequate.

Here, next, are Zalabardo's second and third adequacy conditions:

Adequacy Condition 2.0 (AC2.0)
p(H | E1) > p(H | E2) if and only if c(H, E1) > c(H, E2).

Adequacy Condition 3.0 (AC3.0)
If (i) p(E1 | H1) = p(E2 | H2) and (ii) p(E1 | ¬H1) < p(E2 | ¬H2), then c(H1, E1) > c(H2, E2).

Since c2 fails to meet AC3.0, he rejects c2 as inadequate. And since c3 fails to meet AC2.0, he rejects c3 as inadequate. Zalabardo holds that c4, unlike c1, c2, and c3, meets each of AC1.0, AC2.0, and AC3.0.9 He concludes that c4 is adequate but c1, c2, and c3 are not. The argument in full can be put as follows:

Zalabardo's Argument (ZA)
(1) Any adequate confirmation measure should meet each of AC1.0, AC2.0, and AC3.0.
(2) AC1.0, AC2.0, and AC3.0 are all met by c4 but not by c1, c2, or c3 (as c1 fails to meet AC1.0, c2 fails to meet AC3.0, and c3 fails to meet AC2.0).
(3) It is not the case that each of c1-c4 is inadequate.
Thus
(4) c4 is adequate but c1, c2, and c3 are not.

8 Zalabardo, though, does not refer to it as his first adequacy condition. When he speaks of his first adequacy condition, he has in mind AC3.2 below. This is a mere terminological matter, however. There is no doubt that he holds that any adequate confirmation measure should meet AC1.0. Similar comments are in order with respect to AC2.0 and AC3.0 below.

9 Zalabardo never explicitly claims that c4 meets AC1.0. I take it, though, that he holds that c4 meets AC1.0, since otherwise he would be in no position to accept c4 as adequate.
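The numerical claims surrounding AC1.0 are easy to check directly. The following sketch is my own illustration (the variable names are mine; the values are those of the two Schlesinger scenarios): c1 yields roughly the same verdict for both scenarios, contrary to AC1.0, whereas c2 ranks the aircraft scenario far higher.

```python
# Posterior and prior probabilities in Schlesinger's two scenarios.
post1, prior1 = 0.01, 1e-9   # aircraft: p(H1 | E1), p(H1)
post2, prior2 = 0.27, 0.26   # glider:   p(H2 | E2), p(H2)

def c1(post, prior):
    """Probability difference measure: p(H | E) - p(H)."""
    return post - prior

def c2(post, prior):
    """Probability ratio measure: p(H | E) / p(H)."""
    return post / prior

# c1 does not rank the aircraft scenario above the glider scenario
# (both differences are about 0.01), contrary to AC1.0:
print(c1(post1, prior1) > c1(post2, prior2))  # False
# c2, by contrast, ranks the aircraft scenario far higher (1e7 vs ~1.04):
print(c2(post1, prior1) > c2(post2, prior2))  # True
```

This mirrors the paper's report that c2 but not c1 meets Zalabardo's first adequacy condition.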
Zalabardo never explicitly puts forward (3). But without it (or a premise or premises entailing it) ZA would be invalid, for (1) and (2) leave it open that there are adequacy conditions on confirmation measures in addition to AC1.0, AC2.0, and AC3.0 and that c4 fails to meet at least one of them.

2.2 G&M

G&M raise a worry to the effect that AC3.0 runs counter to (or is in tension with) the idea behind AC1.0. They write (where here and in the passages below some notation has been modified):

Neither Schlesinger nor Zalabardo try to formulate a criterion based on this example, but in the Original Schlesinger Scenarios, the idea seems to be that it is the much larger relative increase in probability that warrants the greater degree of confirmation in one case than in the other. Consider now a pair of scenarios, which for reasons that will become apparent, we will call Modified Schlesinger Scenarios. ... In the first scenario, let p(E1 | H1) = 19/20, p(E1 | ¬H1) = 1/40 and p(H1) = 2/3, and in the second, let p(E2 | H2) = 19/20, p(E2 | ¬H2) = 1/30 and p(H2) = 1/10. According to AC3.0, the confirmation should be greater in the former case than in the latter .... However, in conflict with AC3.0, the intuition underlying Schlesinger's argument suggests that the degree of confirmation should be much greater in the latter case since it has a much larger relative increase in probability (an increase from 1/10 to 19/25 compared to an increase from 2/3 to 76/77 in the former case). (G&M 2015, p. 62, emphasis original)

This worry carries over to c4 since, as noted by G&M, c4(H1, E1) = 38 > 28.5 = c4(H2, E2) in the modified version of Schlesinger's case.10 Now consider a fourth condition:

Adequacy Condition 4.0 (AC4.0)
If E entails H, then c(H, E) is maximal. If E entails ¬H, then c(H, E) is minimal.

10 The same is true, of course, with respect to c3. Things are different with c1 and c2.
It follows both by c1 and by c2 that the degree to which E1 confirms H1 is less than the degree to which E2 confirms H2; c1(H1, E1) ≈ 0.320 < 0.66 = c1(H2, E2) and c2(H1, E1) ≈ 1.481 < 7.6 = c2(H2, E2).

G&M note that c4 meets AC4.0 and then argue that this helps in answering the worry at issue. They write:

AC4.0 makes sense if confirmation is to be understood as a generalization of logical entailment as is appropriate in the context of inductive logic. Equating 'E entails H' with p(H | E) = 1, it is reasonable that in certain cases, such as the confirmation of H1 by E1 in the Modified Schlesinger Scenarios, the degree of confirmation should be high even though the prior probability was high to start with. Thus, although Schlesinger's argument based on the Original Schlesinger Scenarios is very plausible, care must be taken if it is to be applied more generally, and there is no clear reason to think that it can be extended in such a way as to pose a problem for c4. (G&M 2015, p. 62)

They follow up this passage with a cautionary note to the effect that they are not insisting on AC4.0 as an adequacy condition on confirmation measures and with some brief remarks on the situation as they see it. They write:

Having said that, it is not clear that AC4.0 should be adopted as an adequacy criterion for confirmation measures. While it is suitable if confirmation is to be considered as a generalization of entailment, it is not so clear that it must be accepted if confirmation is to be considered in terms of the more general notion of evidential support as discussed at the start of this paper, i.e. quantifying the extent to which the hypothesis is made more probable by the evidence. And the same point applies to AC3.0. While it is far from clear that measures satisfying AC3.0 should be rejected, given its tension with Schlesinger's argument, a more convincing reason would need to be provided to adopt it as an adequacy criterion.
Instead, however, an alternative criterion will be proposed in the following section. (G&M 2015, pp. 62-63)

What, then, is their alternative to AC3.0?11 It can be put as follows:

Adequacy Condition 3.1 (AC3.1)
If (i) p(E | H1) = p(E | H2) and (ii) p(E | ¬H1) < p(E | ¬H2), then c(H1, E) > c(H2, E).

This condition's antecedent is stronger than AC3.0's antecedent, since with AC3.1's antecedent but not with AC3.0's antecedent the evidence proposition for H1 and the evidence proposition for H2 need to be the same. Given this, and given that AC3.1's consequent is identical to AC3.0's consequent, it follows that AC3.1 is weaker than AC3.0.

Does AC3.1 run counter to the idea behind AC1.0? G&M answer in the negative. They write:

Furthermore, there is no obvious tension between AC3.1 and the Original Schlesinger Scenarios or Modified Schlesinger Scenarios. The reason for this is that these scenarios relate to cases where p(H1 | E1)/p(H1) ≠ p(H2 | E2)/p(H2), which cannot arise if p(E | H1) = p(E | H2). (G&M 2015, p. 63)

The point here is that AC3.1, unlike AC3.0, has no application in Schlesinger's case or the modified version of it and thus does not run counter to the idea behind AC1.0.

G&M's argument differs from Zalabardo's argument not just in that in the former AC3.0 is replaced by AC3.1 but also in two further respects.

11 There is a bit of an exegetical puzzle here. Each of (a), (b), and (c) below has some prima facie plausibility (to say the least): (a) G&M claim in effect in the last sentence in the first displayed passage in this subsection that in the modified version of Schlesinger's case H2's increase in probability due to E2 from 1/10 to 19/25 is greater than H1's increase in probability due to E1 from 2/3 to 76/77. (b) G&M claim in the same paragraph from which that passage is taken that by c4 it follows that in the modified version of Schlesinger's case the degree to which E1 confirms H1 is greater than the degree to which E2 confirms H2.
(c) G&M claim in effect in the second sentence in the third displayed passage in this subsection that the sense of confirmation at issue is confirmation in the sense of increase in probability (H's increase in probability due to E). It seems to follow from (a), (b), and (c), though, that: (d) G&M should reject c4 on the grounds that it issues the wrong verdict in the modified version of Schlesinger's case. But, of course, G&M do not reject c4 on the grounds that it issues the wrong verdict in the modified version of Schlesinger's case. What gives? I read G&M as follows. The claim referred to in (c) should be understood in terms of IP unprecisified. The claim referred to in (a), in contrast, should be understood in terms of IP precisified along the lines of IP1 below. If G&M are thus read, then (d) does not follow from (a), (b), and (c) and the puzzle is thus resolved.

The first is that in G&M's argument AC2.0 is replaced by a weaker condition:

Adequacy Condition 2.1 (AC2.1)
If p(H | E1) > p(H | E2), then c(H, E1) > c(H, E2).

The second is that in G&M's argument the set of confirmation measures under consideration is expanded to include the following:

c5(H, E) = [p(H | E) − p(H)] / [1 − p(H)] if p(H | E) ≥ p(H); [p(H | E) − p(H)] / p(H) if p(H | E) < p(H)

c6(H, E) = p(H | E) − p(H | ¬E)

They reject c5 on the grounds that it fails to meet AC3.1 and reject c6 on the grounds that it fails to meet AC2.1.12,13 G&M's argument, then, can be put as follows:

G&M's Argument (GMA)
(1) Any adequate confirmation measure should meet each of AC1.0, AC2.1, and AC3.1.
(2) AC1.0, AC2.1, and AC3.1 are all met by c4 but not by c1, c2, c3, c5, or c6 (as c1 fails to meet AC1.0, c2 fails to meet AC3.1, c3 fails to meet AC2.1, c5 fails to meet AC3.1, and c6 fails to meet AC2.1).
(3) It is not the case that each of c1-c6 is inadequate.
Thus
(4) c4 is adequate but c1, c2, c3, c5, and c6 are not.

G&M never explicitly put forward (3).
But without it (or a premise or premises entailing it) GMA would be invalid. The first and second premises, after all, leave it open that there are adequacy conditions on confirmation measures in addition to AC1.0, AC2.1, and AC3.1 and that c4 fails to meet at least one of them.

12 c5 meets AC3.1 in the special case where p(E | H1) > p(E | ¬H1) and p(E | H2) > p(E | ¬H2) but not in the special case where p(E | H1) < p(E | ¬H1) and p(E | H2) < p(E | ¬H2).

13 There is no mention of c6 in ZA. But ZA could be modified so as to include mention of c6. Zalabardo (2009, p. 632, fn. 5) notes in a footnote that c6, as with c3, fails to meet AC2.0.

3 A fly in the ointment

G&M provide no argument for the thesis that c4 meets AC3.1. They simply claim that it is clear that it does. Are they right in this claim? It will help here to consider Zalabardo's argument for the thesis that c4 meets AC3.0. Zalabardo argues for this thesis by first arguing that c4 meets the following:

Adequacy Condition 3.2 (AC3.2)
If (i) p(E1 | H) = p(E2 | H) and (ii) p(E1 | ¬H) < p(E2 | ¬H), then c(H, E1) > c(H, E2).

He writes (where some notation has been modified):

Clearly, treating c4 as our measure of confirmation would satisfy AC3.2, since if p(E1 | H) = p(E2 | H) and p(E1 | ¬H) < p(E2 | ¬H), c4(H, E1) and c4(H, E2) have the same numerator, but c4(H, E1) has a smaller denominator than c4(H, E2) does. (Zalabardo 2009, p. 633)

He later claims in effect that this reasoning carries over to c4 and AC3.0. It might seem that he is right in all this and that his reasoning can be adapted to show that c4 meets AC3.1. There is a fly in the ointment, however. Zalabardo's reasoning with respect to c4 and AC3.2 fails. Suppose, as is possible, that:

p(E1 | H) = 0 = p(E2 | H)
p(E1 | ¬H) = 0.01 < 0.02 = p(E2 | ¬H)

Then c4(H, E1) = 0 = c4(H, E2). Hence c4 fails to meet AC3.2. The same is true with respect to c4 and AC3.1.
There are cases where:

p(E | H1) = 0 = p(E | H2)
p(E | ¬H1) = 0.01 < 0.02 = p(E | ¬H2)

All such cases are cases where c4(H1, E) = 0 = c4(H2, E). Hence c4 fails to meet AC3.1. Hence (2) in GMA is false. Hence GMA is unsound.

It will not help to revert back to AC3.0 and ZA. Since AC3.1 is weaker than AC3.0, it follows that any measure failing to meet AC3.1 also fails to meet AC3.0. So, given that c4 fails to meet AC3.1, it follows that c4 also fails to meet AC3.0.14

There is an easy fix. G&M can simply replace AC3.1 in their argument with the following slightly weaker condition:

Adequacy Condition 3.3 (AC3.3)
If (i) p(E | H1) = p(E | H2) > 0 and (ii) p(E | ¬H1) < p(E | ¬H2), then c(H1, E) > c(H2, E).

This condition is met by c4 but not by c2.15 Consider, then, the following slight variant of GMA:

G&M's Argument* (GMA*)
(1) Any adequate confirmation measure should meet each of AC1.0, AC2.1, and AC3.3.
(2) AC1.0, AC2.1, and AC3.3 are all met by c4 but not by c1, c2, c3, c5, or c6 (as c1 fails to meet AC1.0, c2 fails to meet AC3.3, c3 fails to meet AC2.1, c5 fails to meet AC3.3, and c6 fails to meet AC2.1).
(3) It is not the case that each of c1-c6 is inadequate.
Thus
(4) c4 is adequate but c1, c2, c3, c5, and c6 are not.

This argument is like GMA except that here the second premise is true. Is the argument otherwise unproblematic, though? I turn now to the task of answering this question.

4 Is G&M's Argument* (GMA*) sound?

The main aim in this section is to show that c4 is inadequate in the context of each of IP1, IP2, IP3, and IP4. I set out IP1, IP2, IP3, and IP4 in Subsection 4.1. I consider c4 in the context of IP1 in Subsection 4.2, c4 in the context of IP2 in Subsection 4.3, c4 in the context of IP3 in Subsection 4.4, and c4 in the context of IP4 in Subsection 4.5. I draw a general lesson in Subsection 4.6.

14 Hence (2) in ZA is false. Hence ZA, as with GMA, is unsound.
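The Section 3 counterexample, and the way AC3.3's added clause p(E | H1) = p(E | H2) > 0 screens it off, can be checked numerically. The following is a minimal sketch of my own (the function name and numbers are mine; the zero-denominator convention follows the standard reading of c4 discussed in the text):

```python
def c4(p_e_h, p_e_noth):
    """Likelihood ratio measure: p(E | H) / p(E | not-H)."""
    if p_e_noth == 0:
        return float("inf")  # standard convention when p(E | not-H) = 0
    return p_e_h / p_e_noth

# The problem case for AC3.1 (and AC3.2): both likelihoods p(E | Hi)
# are 0 and the negated-hypothesis likelihoods differ, yet c4 ties at 0.
print(c4(0, 0.01), c4(0, 0.02))  # 0.0 0.0 -- no strict inequality

# AC3.3 requires a shared positive likelihood; then the smaller
# denominator does yield strictly greater confirmation, as Zalabardo's
# same-numerator reasoning intends.
print(c4(0.5, 0.01) > c4(0.5, 0.02))  # True
```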
15 There is a worry to the effect that the move from AC3.1 to AC3.3 is ad hoc. G&M would need to answer this worry if they were to go with my suggested fix.

4.1 IP1, IP2, IP3, and IP4

Recall from Section 1 that each of IP1, IP2, IP3, and IP4 is based on the assumption that the degree to which E increases H's probability is a matter of the distance between p(H | E) and a certain other probability involving H. What is the certain other probability in question? And how is the distance between p(H | E) and it to be understood?

c4(H, E) is formulated above in Section 1 in terms of p(E | H) and p(E | ¬H). But there is no necessity in this, for it can instead be formulated in terms of p(H | E) and p(H):

c4(H, E) = p(E | H) / p(E | ¬H) = [p(H | E) / p(¬H | E)] / [p(H) / p(¬H)] = [p(H | E) / (1 − p(H | E))] / [p(H) / (1 − p(H))]

This opens up the possibility that c4(H, E) is adequate qua measure of the distance between p(H | E) and p(H). And this, in turn, together with the assumption that the degree to which E increases H's probability is a matter of the distance between p(H | E) and a certain other probability involving H, opens up the possibility that c4(H, E) is adequate qua measure of the degree to which E increases H's probability.

A natural answer to the first of the two questions raised above is that the certain other probability in question is p(H). An alternative answer is that the certain other probability in question is p(H | ¬E). Suppose that:

p(H1 | E1) = 0.99 > 0.98 = p(H1) > 0.97 = p(H1 | ¬E1)
p(H2 | E2) = 0.99 > 0.98 = p(H2) > 0.01 = p(H2 | ¬E2)

By the first answer it follows that the degree to which E1 increases H1's probability is the same as the degree to which E2 increases H2's probability. By the second answer, in contrast, it follows that the degree to which E1 increases H1's probability is not the same as the degree to which E2 increases H2's probability.
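The contrast between the two answers can be illustrated with the numbers just given (a sketch of mine; the variable names are not from the text):

```python
# p(H | E), p(H), and p(H | not-E) in the two cases above.
post1, prior1, counterfactual1 = 0.99, 0.98, 0.97
post2, prior2, counterfactual2 = 0.99, 0.98, 0.01

# First answer: distance between p(H | E) and p(H). The two cases tie.
print(abs(post1 - prior1) == abs(post2 - prior2))  # True

# Second answer: distance between p(H | E) and p(H | not-E).
# The cases come apart sharply (about 0.02 versus 0.98).
print(abs(post1 - counterfactual1), abs(post2 - counterfactual2))
```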
I am not the first to note that the degree to which E increases H's probability can be understood in terms of the distance between p(H | E) and p(H | ¬E). Joyce (1999, Ch. 6, sec. 6.4), for one, notes this (see also Christensen 1999, Hajek and Joyce 2008, and Joyce 2008). He also notes an interesting difference between c1 and c6. The difference is that on c1 the degree to which E can increase H's probability approaches 0 as p(E) approaches 1, whereas this is not the case on c6. So on c6 but not on c1 the degree to which E increases H's probability can be high even if p(E) is itself high.

Joyce (1999, Ch. 6, sec. 6.4) stresses that there is no need to choose between c1 and c6. He writes (where notation has been modified and where he speaks in terms of "evidential relevance" as opposed to "increase in probability"):

Many philosophers have defended c1 as the correct measure of evidential relevance. c6 has had fewer champions. My view is that there is no choice to be made here; it is not as if one of these measures is right and the other wrong. The c1 and c6 measures capture contrasting, but equally legitimate, ways of thinking about evidential relevance, which serve somewhat different purposes. (Joyce 1999, p. 206)

I agree. This is part of what I had in mind above in Section 1 when I said that IP1, IP2, IP3, and IP4 all capture an important respect in which E can be related qua evidence to H.

Return now to the question of how distance, that is, the distance between p(H | E) and the certain other probability in question, is to be understood. Suppose, for definiteness, that the two probabilities at issue are p(H | E) and p(H). A natural answer is that the distance between p(H | E) and p(H) is given by the absolute distance between them. An alternative answer is that the distance between p(H | E) and p(H) is given by the absolute distance between them relative to (or as compared to) the absolute distance between them in the special case where H is entailed by E.
Suppose that:

p(H1 | E1) = 1 > 0.01 = p(H1)
p(H2 | E2) = 1 > 0.99 = p(H2)

The absolute distance between p(H1 | E1) and p(H1) is greater than the absolute distance between p(H2 | E2) and p(H2). But the absolute distance between p(H1 | E1) and p(H1) relative to the absolute distance between them in the special case where H1 is entailed by E1 is the same as the absolute distance between p(H2 | E2) and p(H2) relative to the absolute distance between them in the special case where H2 is entailed by E2. This is because in each case the absolute distance between the hypothesis's posterior and prior probabilities is identical to the absolute distance between them in the special case where the hypothesis is entailed by the evidence. Thus, though the absolute distances are different, the relative distances are the same.

It should be noted that c5 is put forward by Crupi and Tentori (2013, 2014) as a measure of relative distance. They write (where notation has been modified):

To appreciate this conceptual unity, note that in case of positive inductive support or confirmation c5(H, E) expresses the relative reduction of the initial distance from certainty of H being true as yielded by E, i.e., it measures how far upward the posterior p(H | E) has gone in covering the distance between the prior p(H) and 1. Similarly, in the case of negative inductive support or disconfirmation, c5(H, E) reflects the relative reduction of the initial distance from certainty of H being false as yielded by E, i.e., it measures how far downward the posterior p(H | E) has gone in covering the distance between the prior p(H) and 0. Accordingly, c5(H, E) measures the extent to which the initial probability distance from certainty concerning the truth (falsehood) of H is reduced by the confirming (disconfirming) statement E. Or, put otherwise, how much of such distance is "covered" by the upward (downward) jump from p(H) to p(H | E).
Thus, c5(H, E) is a measure of the relative reduction of the distance from certainty that a conclusion/hypothesis of interest is true or false, or, with a slight abuse of language, a relative distance measure. (Crupi and Tentori 2013, p. 366, emphasis original)

There is no explicit mention in this passage of partial entailment. But c5 is also put forward by Crupi and Tentori (2013, 2014) as a measure of partial entailment.16 The view, I take it, is that c5 is adequate qua measure of the relative distance between p(H | E) and p(H) and thus is adequate qua measure of the degree to which H is partially entailed by E.

Consider now the following variant of c5:

c7(H, E) = [p(H | E) − p(H | ¬E)] / [1 − p(H | ¬E)] if p(H | E) ≥ p(H | ¬E); [p(H | E) − p(H | ¬E)] / p(H | ¬E) if p(H | E) < p(H | ¬E)

This measure is like c5 except that here the probabilities at issue are p(H | E) and p(H | ¬E) as opposed to p(H | E) and p(H). It can thus be understood as a measure of the relative distance between p(H | E) and p(H | ¬E).

It will help to be a bit clearer about how exactly absolute distance and relative distance differ from each other. Here, first, is a partial characterization of absolute distance:

Absolute Distance (AD)
ADa: If A = A* > B = B*, then the absolute distance between A and B is the same as the absolute distance between A* and B*.
ADb: If A > A* > B = B*, then the absolute distance between A and B is greater than the absolute distance between A* and B*.
ADc: If A = A* > B* > B, then the absolute distance between A and B is greater than the absolute distance between A* and B*.

16 The title of Crupi and Tentori (2013) is "Confirmation as partial entailment: A representation theorem in inductive logic".

Here, second, is a partial characterization of relative distance:

Relative Distance (RD)
RDa: If A = A* > B = B*, then the relative distance between A and B is the same as the relative distance between A* and B*.
RDb: If A > A* > B = B*, then the relative distance between A and B is greater than the relative distance between A* and B*.
RDc: If 1 > A = A* > B* > B, then the relative distance between A and B is greater than the relative distance between A* and B*.
RDd: The relative distance between A and B is maximal if and only if A = 1 > B.

Suppose that:

A = A* = 1 > B* > B

By ADc it follows that the absolute distance between A and B is greater than the absolute distance between A* and B*. By RDd, in contrast, it follows that the relative distance between A and B is the same as the relative distance between A* and B*.

I can now set out IP1, IP2, IP3, and IP4. They are:

Increase in Probability 1 (IP1)
IP1a: c(H, E) is a matter of the absolute distance between p(H | E) and p(H).
IP1b: If p(H1 | E1) = p(H2 | E2) > p(H1) = p(H2), then c(H1, E1) = c(H2, E2).
IP1c: If p(H1 | E1) > p(H2 | E2) > p(H1) = p(H2), then c(H1, E1) > c(H2, E2).
IP1d: If p(H1 | E1) = p(H2 | E2) > p(H2) > p(H1), then c(H1, E1) > c(H2, E2).

Increase in Probability 2 (IP2)
IP2a: c(H, E) is a matter of the absolute distance between p(H | E) and p(H | ¬E).
IP2b: If p(H1 | E1) = p(H2 | E2) > p(H1 | ¬E1) = p(H2 | ¬E2), then c(H1, E1) = c(H2, E2).
IP2c: If p(H1 | E1) > p(H2 | E2) > p(H1 | ¬E1) = p(H2 | ¬E2), then c(H1, E1) > c(H2, E2).
IP2d: If p(H1 | E1) = p(H2 | E2) > p(H2 | ¬E2) > p(H1 | ¬E1), then c(H1, E1) > c(H2, E2).

Increase in Probability 3 (IP3)
IP3a: c(H, E) is a matter of the relative distance between p(H | E) and p(H).
IP3b: If p(H1 | E1) = p(H2 | E2) > p(H1) = p(H2), then c(H1, E1) = c(H2, E2).
IP3c: If p(H1 | E1) > p(H2 | E2) > p(H1) = p(H2), then c(H1, E1) > c(H2, E2).
IP3d: If 1 > p(H1 | E1) = p(H2 | E2) > p(H2) > p(H1), then c(H1, E1) > c(H2, E2).
IP3e: c(H, E) is maximal if and only if p(H | E) = 1 > p(H).

Increase in Probability 4 (IP4)
IP4a: c(H, E) is a matter of the relative distance between p(H | E) and p(H | ¬E).
IP4b: If p(H1 | E1) = p(H2 | E2) > p(H1 | ¬E1) = p(H2 | ¬E2), then c(H1, E1) = c(H2, E2).
IP4c: If p(H1 | E1) > p(H2 | E2) > p(H1 | ¬E1) = p(H2 | ¬E2), then c(H1, E1) > c(H2, E2).
IP4d: If 1 > p(H1 | E1) = p(H2 | E2) > p(H2 | ¬E2) > p(H1 | ¬E1), then c(H1, E1) > c(H2, E2).
IP4e: c(H, E) is maximal if and only if p(H | E) = 1 > p(H | ¬E).

IP1b-IP1d follow from IP1a together with ADa-ADc. IP2b-IP2d follow from IP2a together with ADa-ADc. And so on. No two of IP1, IP2, IP3, and IP4 agree on all cases (see Appendix A for details). The task now is to evaluate c4 in light of IP1, IP2, IP3, and IP4.

4.2 Is c4 adequate in the context of IP1?

Suppose that H1 is entailed by E1 and that H2 is entailed by E2 so that:

p(H1 | E1) = 1 = p(H2 | E2)

Suppose further that:

p(H1) = 0.01 < 0.99 = p(H2)

It follows by IP1d that c(H1, E1) > c(H2, E2). But, since c4(H, E) is maximal at ∞ if H is entailed by E, it follows that c4(H1, E1) = c4(H2, E2).17 Hence c4 runs counter to IP1d. Hence c4 is inadequate in the context of IP1.18

17 I am assuming, as is standard, that c4 should be understood so that if H is entailed by E and thus p(E | ¬H) = 0, then c4(H, E) = ∞. See G&M (2015, p. 62, n. 4) and Iranzo and Martinez de Lejarza (2013, sec. 3) for relevant discussion.

18 It is straightforward to show that c1 and c2 meet each of IP1b-IP1d and that c3, c5, c6, and c7 do not. It is also straightforward to show that c1 but not c2 meets the following generalization of IP1d:

IP1d*: If p(H1 | E1) = p(H2 | E2) and p(H2) > p(H1), then c(H1, E1) > c(H2, E2).

Suppose, for example, that:

p(H1 | E1) = 0 = p(H2 | E2)

4.3 Is c4 adequate in the context of IP2?

Suppose that H1 is entailed by E1 and that H2 is entailed by E2 so that:

p(H1 | E1) = 1 = p(H2 | E2)

Suppose further that:

p(H1 | ¬E1) = 0.01 < 0.99 = p(H2 | ¬E2)

It follows by IP2d that c(H1, E1) > c(H2, E2). But, since c4(H, E) is maximal at ∞ if H is entailed by E, it follows that c4(H1, E1) = c4(H2, E2). Hence c4 runs counter to IP2d.
Hence c4 is inadequate in the context of IP2.19

18 (continued) p(H1) = 0.01 < 0.02 = p(H2). Then c1(H1, E1) = -0.01 > -0.02 = c1(H2, E2) whereas c2(H1, E1) = 0 = c2(H2, E2). It seems clear that IP1 should be modified so as to include IP1d*, for prior probabilities always matter when the issue is absolute distance. It thus seems clear that c2 is inadequate in the context of IP1.

19 It is straightforward to show that c6 meets each of IP2b-IP2d and that c1, c2, c3, c5, and c7 do not.

4.4 Is c4 adequate in the context of IP3?

The situation is a bit different with respect to c4 and IP3. It turns out that c4 meets each of IP3b-IP3e. This can be seen in two main steps. First, note that:

(1) c4(H1, E1) = p(E1 | H1)/p(E1 | ¬H1) = [p(H1 | E1)/p(¬H1 | E1)] / [p(H1)/p(¬H1)]

(2) c4(H2, E2) = p(E2 | H2)/p(E2 | ¬H2) = [p(H2 | E2)/p(¬H2 | E2)] / [p(H2)/p(¬H2)]

Second, note that:

(3) c4(H, E) is maximal at ∞ if and only if p(H | E) = 1 > p(H).

It follows from (1) and (2) that c4 meets each of IP3b-IP3d. It follows from (3), in turn, that c4 meets IP3e.

Does this mean that c4 is plausible given IP3? No, because it could be that certain conditions in addition to IP3b-IP3e follow from or are plausible given IP3a and RDa-RDd, and it could be that c4 fails to meet at least some such additional conditions. Consider the following schemas, where β ≥ 1, τ = (99/9800) + (1/9800)^β + (1/99)^β + 2^β, and τ* = (1/811008)^β + (1/8192)^β + (1/2097152)^β + 4^β:

Schema 1
E H p
T T (99/9800)/τ
T F (1/9800)^β/τ
F T (1/99)^β/τ
F F 2^β/τ

Schema 2
E H p
T T (1/811008)^β/τ*
T F (1/8192)^β/τ*
F T (1/2097152)^β/τ*
F F 4^β/τ*

These schemas are alike in that on all instances of each of them it follows that (see Appendix B.1 for details):

(4) p(H | E) > p(H)

They are nonetheless quite different.
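Claim (4) can be spot-checked numerically. The sketch below (the helper names schema1, schema2, and posterior_and_prior are mine, not from the paper) builds each schema's joint distribution over E and H with exact rationals and confirms that p(H | E) > p(H) for several values of β:

```python
from fractions import Fraction

def schema1(beta):
    # Cell probabilities are the Schema 1 numerators divided by tau,
    # where tau is the sum of the numerators (the normalizer).
    cells = {
        (True, True):   Fraction(99, 9800),
        (True, False):  Fraction(1, 9800) ** beta,
        (False, True):  Fraction(1, 99) ** beta,
        (False, False): Fraction(2) ** beta,
    }
    tau = sum(cells.values())
    return {eh: num / tau for eh, num in cells.items()}

def schema2(beta):
    # Same construction with the Schema 2 numerators and tau*.
    cells = {
        (True, True):   Fraction(1, 811008) ** beta,
        (True, False):  Fraction(1, 8192) ** beta,
        (False, True):  Fraction(1, 2097152) ** beta,
        (False, False): Fraction(4) ** beta,
    }
    tau_star = sum(cells.values())
    return {eh: num / tau_star for eh, num in cells.items()}

def posterior_and_prior(joint):
    # Returns (p(H | E), p(H)) computed from the joint distribution.
    p_e = joint[(True, True)] + joint[(True, False)]
    p_h = joint[(True, True)] + joint[(False, True)]
    return joint[(True, True)] / p_e, p_h

for beta in (1, 2, 5, 10):
    for joint in (schema1(beta), schema2(beta)):
        posterior, prior = posterior_and_prior(joint)
        assert posterior > prior  # claim (4): E confirms H
```

Because Fraction arithmetic is exact, the comparisons are not subject to floating-point error; at β = 1 the computed values reproduce (5) and (8) exactly, with p(H | E) = 99/100 and p(H) = 1/100 on Schema 1 and p(H | E) = 1/100 on Schema 2.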
Schema 1 is such that (see Appendix B.2 for details):

(5) p(H | E) = 0.99 > 0.01 = p(H) if β = 1
(6) p(H | E) approaches 1 as β approaches ∞
(7) p(H) approaches 0 as β approaches ∞

Schema 2, in contrast, is such that (see Appendix B.3 for details):

(8) p(H | E) = 0.01 > 0.0000004 ≈ p(H) if β = 1
(9) p(H | E) approaches 0 as β approaches ∞
(10) p(H) approaches 0 as β approaches ∞

It seems clear that IP3 should be modified to include each of the following conditions:

IP3f: Schema 1 is such that c(H, E) approaches the maximal value as β approaches ∞.
IP3g: It is not the case that Schema 2 is such that c(H, E) approaches the maximal value as β approaches ∞.
IP3h: It is not the case that c(H, E) is less on Schema 1 than on Schema 2 given any admissible value for β.

Why? Because IP3 implies that c(H, E) is maximal if and only if p(H | E) = 1 > p(H), because Schema 1 is such that E gets arbitrarily close to entailing (or behaving as if it entails) H as β gets arbitrarily close to ∞, and because Schema 2, in contrast, is not such that E gets arbitrarily close to entailing (or behaving as if it entails) H as β gets arbitrarily close to ∞; indeed, Schema 2 is such that E gets arbitrarily close to entailing (or behaving as if it entails) ¬H as β gets arbitrarily close to ∞.

Does c4 meet IP3f-IP3h? The answer is negative: c4 meets IP3f but fails to meet IP3g and IP3h. This follows from the fact that (see Appendix B.4 for details):

(11) c4(H, E) approaches ∞ on Schema 1 as β approaches ∞.
(12) c4(H, E) approaches ∞ on Schema 2 as β approaches ∞.
(13) c4(H, E) is less on Schema 1 than on Schema 2 given any admissible value for β.

Hence c4 is inadequate in the context of IP3 (understood so as to include IP3f-IP3h).

It might seem that I am being too quick here. For, it might seem that IP3f-IP3h are less than obvious upon reflection.
There are lots of examples, the idea goes, where p(H | E) gets smaller and yet the relative distance between p(H | E) and p(H) gets larger. Suppose, for example, that:

Case 1: p(H | E) = 0.5 > 0.49 = p(H)
Case 2: p(H | E) = 0.49 > 0.01 = p(H)

It follows by c4 that c(H, E) is less in Case 1 than in Case 2. But this, the idea goes, seems exactly right in the context of IP3.

I agree that there are lots of examples where p(H | E) gets smaller and yet the relative distance between p(H | E) and p(H) gets larger. But IP3f-IP3h do not imply (separately or together) otherwise. c5 meets each of IP3f-IP3h. This follows from the fact that (see Appendix B.5 for details):

(14) c5(H, E) approaches 1, its maximal value, on Schema 1 as β approaches ∞.
(15) c5(H, E) approaches 0 on Schema 2 as β approaches ∞.
(16) c5(H, E) is greater on Schema 1 than on Schema 2 given any admissible value for β.

But there are lots of examples where p(H | E) gets smaller and yet c5(H, E) gets larger. This is true of the example above involving Case 1 and Case 2. Hence it is not at all problematic for IP3f-IP3h that there are lots of examples where p(H | E) gets smaller and yet the relative distance between p(H | E) and p(H) gets larger.20

4.5 Is c4 adequate in the context of IP4?

Suppose that:

p(H1 | E1) = 0.99 > 0.02 = p(H1) > 0.01 = p(H1 | ¬E1)
p(H2 | E2) = 0.99 > 0.03 = p(H2) > 0.01 = p(H2 | ¬E2)

It follows by IP4b that c(H1, E1) = c(H2, E2). But, since p(H1 | E1) = p(H2 | E2) > p(H2) > p(H1), it follows that c4(H1, E1) > c4(H2, E2). Hence c4 runs counter to IP4b. Hence c4 is inadequate in the context of IP4.21

4.6 A general lesson

It follows from all this that any valid argument in support of c4 (GMA*, for example) is unsound in the context of each of IP1, IP2, IP3, and IP4.
For, any such argument has a false conclusion in the context of each of IP1, IP2, IP3, and IP4, and so, since any valid argument with a false conclusion has a false premise, any such argument has a false premise in the context of each of IP1, IP2, IP3, and IP4.

20 It is straightforward to show that c5 meets each of IP3b-IP3e and that c1, c2, c3, c6, and c7 do not.

21 It is straightforward to show that c7 meets each of IP4b-IP4e and that c1, c2, c3, c5, and c6 do not.

5 Conclusion

I have argued that c4 is inadequate in the context of each of IP1, IP2, IP3, and IP4. I take this to be significant since each of IP1, IP2, IP3, and IP4 is a natural way of precisifying IP and since, further, each of them captures an important respect in which E can be related qua evidence to H.

I have not argued, though, that this is the end of the road for c4. First, it could be that there is a fifth way of precisifying IP, distinct from IP1, IP2, IP3, and IP4, and that c4 is adequate in the context of that fifth way of precisifying IP. Second, it could be that there is a second sense of confirmation, distinct from IP, and that c4 is adequate in the context of that second sense of confirmation (or in the context of a certain way of precisifying that second sense of confirmation).22 Nothing in what I have argued closes off either such possibility.

I have a worry, however, concerning any argument in support of c4 based in part on AC2.1. Suppose that p(H | E1) > p(H) and p(H | E2) > p(H). The idea behind AC2.1, it seems, is that if p(H | E1) > p(H | E2), then the distance (absolute or relative) between p(H | E1) and p(H) is greater than the distance (absolute or relative) between p(H | E2) and p(H). But this suggests the context of IP1 or the context of IP3, and, as I have argued, c4 is inadequate in each such context. My worry, then, is that any argument in support of c4 based in part on AC2.1 is at least tacitly based on a context in which c4 is inadequate.
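For concreteness, the section 4.5 counterexample can be checked directly, using the odds formulation of c4 given in (1) and (2). The sketch below (the function name c4 is mine) shows that c4 ranks the two cases unequally even though their posteriors and their values of p(H | ¬E) match, which is exactly what IP4b forbids:

```python
from fractions import Fraction

def c4(posterior, prior):
    # Odds formulation from (1)-(2):
    # c4(H, E) = [p(H|E)/p(~H|E)] / [p(H)/p(~H)]
    return (posterior / (1 - posterior)) / (prior / (1 - prior))

# Section 4.5: p(H1|E1) = p(H2|E2) = 0.99 and p(H1|~E1) = p(H2|~E2) = 0.01,
# so IP4b requires c(H1, E1) = c(H2, E2); but the priors differ (0.02 vs 0.03).
c_h1 = c4(Fraction(99, 100), Fraction(2, 100))
c_h2 = c4(Fraction(99, 100), Fraction(3, 100))
assert c_h1 > c_h2  # c4 is sensitive to the priors, so it violates IP4b
```

With exact rationals, c_h1 works out to 4851 and c_h2 to 3201, so the two cases receive different c4 values despite satisfying the antecedent of IP4b.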
Acknowledgments

I wish to thank an anonymous reviewer and Tomoji Shogenji for very helpful comments on prior versions of the paper.

Appendix A

Suppose that:

p(H1 | E1) = 1 > p(H1) = 0.99 > p(H1 | ¬E1) = 0.01
p(H2 | E2) = 1 > p(H2) = 0.03 > p(H2 | ¬E2) = 0.02
p(H3 | E3) = 0.99 > p(H3) = 0.03 > p(H3 | ¬E3) = 0.01
p(H4 | E4) = 0.99 > p(H4) = 0.03 > p(H4 | ¬E4) = 0.02

22 It might be that c4 is best thought of in terms of confirmation in the sense of "partial tracking". See Roche (2016) and Roush (2005) for relevant discussion.

IP1 implies that c(H1, E1) < c(H2, E2) whereas IP2 implies that c(H1, E1) > c(H2, E2). Hence IP2 disagrees with IP1 on c(H1, E1) versus c(H2, E2). IP3 implies that c(H1, E1) = c(H2, E2). Hence IP3 disagrees with each of IP1 and IP2 on c(H1, E1) versus c(H2, E2). IP4 implies that c(H1, E1) = c(H2, E2). Hence IP4 disagrees with each of IP1 and IP2 on c(H1, E1) versus c(H2, E2). IP3 implies that c(H3, E3) = c(H4, E4) whereas IP4 implies that c(H3, E3) > c(H4, E4). Hence IP4 disagrees with IP3 on c(H3, E3) versus c(H4, E4). Hence no two of IP1, IP2, IP3, and IP4 agree with each other on all cases. QED

Appendix B

B.1

All instances of Schema 1 are such that:

(17) p(H | E) > p(H) iff (99/9800) / [(99/9800) + (1/9800)^β] > [(99/9800) + (1/99)^β] / [(99/9800) + (1/9800)^β + (1/99)^β + 2^β] iff (99/9800)(2^β) > (1/99)^β (1/9800)^β

(18) (99/9800)(2^β) > (1/99)^β (1/9800)^β

All instances of Schema 2 are such that:

(19) p(H | E) > p(H) iff (1/811008)^β / [(1/811008)^β + (1/8192)^β] > [(1/811008)^β + (1/2097152)^β] / [(1/811008)^β + (1/8192)^β + (1/2097152)^β + 4^β] iff (1/811008)^β (4^β) > (1/2097152)^β (1/8192)^β

(20) (1/811008)^β (4^β) > (1/2097152)^β (1/8192)^β

Given that (17) and (18) hold on all instances of Schema 1 and that (19) and (20) hold on all instances of Schema 2, it follows that (4) holds on all instances of Schema 1 and on all instances of Schema 2.
QED

B.2

It follows on Schema 1 that if β = 1, then:

(21) p(H | E) = (99/9800) / [(99/9800) + (1/9800)] = 0.99

(22) p(H) = [(99/9800) + (1/99)] / [(99/9800) + (1/9800) + (1/99) + 2] = 0.01

Next, observe that:

(23) lim(β→∞) p(H | E) = lim(β→∞) (99/9800) / [(99/9800) + (1/9800)^β] = 1

(24) lim(β→∞) p(H) = lim(β→∞) [(99/9800) + (1/99)^β] / [(99/9800) + (1/9800)^β + (1/99)^β + 2^β] = 0

That (5) holds on Schema 1 follows from (21) and (22). That (6) and (7) hold on Schema 1 follows from (23) and (24). QED

B.3

It follows on Schema 2 that if β = 1, then:

(25) p(H | E) = (1/811008) / [(1/811008) + (1/8192)] = 0.01

(26) p(H) = [(1/811008) + (1/2097152)] / [(1/811008) + (1/8192) + (1/2097152) + 4] ≈ 0.0000004

Next, observe that:

(27) lim(β→∞) p(H | E) = lim(β→∞) (1/811008)^β / [(1/811008)^β + (1/8192)^β] = 0

(28) lim(β→∞) p(H) = lim(β→∞) [(1/811008)^β + (1/2097152)^β] / [(1/811008)^β + (1/8192)^β + (1/2097152)^β + 4^β] = 0

That (8) holds on Schema 2 follows from (25) and (26). That (9) holds on Schema 2 follows from (27). That (10) holds on Schema 2 follows from (28). QED

B.4

First, observe that on Schema 1:

(29) lim(β→∞) c4(H, E) = lim(β→∞) { (99/9800) / [(99/9800) + (1/99)^β] } / { (1/9800)^β / [(1/9800)^β + 2^β] } = ∞

Second, observe that on Schema 2:

(30) lim(β→∞) c4(H, E) = lim(β→∞) { (1/811008)^β / [(1/811008)^β + (1/2097152)^β] } / { (1/8192)^β / [(1/8192)^β + 4^β] } = ∞

Third, it can be verified that the following holds for any admissible value for β:

(31) { (99/9800) / [(99/9800) + (1/99)^β] } / { (1/9800)^β / [(1/9800)^β + 2^β] } < { (1/811008)^β / [(1/811008)^β + (1/2097152)^β] } / { (1/8192)^β / [(1/8192)^β + 4^β] }

That (11) holds follows from (29). That (12) holds follows from (30). That (13) holds follows from (31).
QED

B.5

First, observe that on Schema 1:

(32) lim(β→∞) c5(H, E) = lim(β→∞) { (99/9800) / [(99/9800) + (1/9800)^β] - [(99/9800) + (1/99)^β] / [(99/9800) + (1/9800)^β + (1/99)^β + 2^β] } / { 1 - [(99/9800) + (1/99)^β] / [(99/9800) + (1/9800)^β + (1/99)^β + 2^β] } = 1

Second, observe that on Schema 2:

(33) lim(β→∞) c5(H, E) = lim(β→∞) { (1/811008)^β / [(1/811008)^β + (1/8192)^β] - [(1/811008)^β + (1/2097152)^β] / [(1/811008)^β + (1/8192)^β + (1/2097152)^β + 4^β] } / { 1 - [(1/811008)^β + (1/2097152)^β] / [(1/811008)^β + (1/8192)^β + (1/2097152)^β + 4^β] } = 0

Third, it can be verified that the following holds for any admissible value for β:

(34) { (99/9800) / [(99/9800) + (1/9800)^β] - [(99/9800) + (1/99)^β] / [(99/9800) + (1/9800)^β + (1/99)^β + 2^β] } / { 1 - [(99/9800) + (1/99)^β] / [(99/9800) + (1/9800)^β + (1/99)^β + 2^β] } > { (1/811008)^β / [(1/811008)^β + (1/8192)^β] - [(1/811008)^β + (1/2097152)^β] / [(1/811008)^β + (1/8192)^β + (1/2097152)^β + 4^β] } / { 1 - [(1/811008)^β + (1/2097152)^β] / [(1/811008)^β + (1/8192)^β + (1/2097152)^β + 4^β] }

That (14) holds follows from (32). That (15) holds follows from (33). That (16) holds follows from (34). QED

References

Brössel, P. (2013). The problem of measure sensitivity redux. Philosophy of Science, 80, 378-397.
Christensen, D. (1999). Measuring confirmation. Journal of Philosophy, 96, 437-461.
Crupi, V., and Tentori, K. (2013). Confirmation as partial entailment: A representation theorem in inductive logic. Journal of Applied Logic, 11, 364-372.
Crupi, V., and Tentori, K. (2014). Erratum to "Confirmation as partial entailment". Journal of Applied Logic, 12, 230-231.
Fitelson, B. (1999). The plurality of Bayesian measures of confirmation and the problem of measure sensitivity. Philosophy of Science, 66, S362-S378.
Glass, D., and McCartney, M. (2015). A new argument for the likelihood ratio measure of confirmation. Acta Analytica, 30, 59-65.
Hajek, A., and Joyce, J. (2008). Confirmation. In S. Psillos and M.
Curd (Eds.), The Routledge companion to philosophy of science (pp. 115-128). London: Routledge.
Iranzo, V., and Martinez de Lejarza, I. (2013). On ratio measures of confirmation: Critical remarks on Zalabardo's argument for the likelihood-ratio measure. Journal for General Philosophy of Science, 44, 193-200.
Joyce, J. (1999). The foundations of causal decision theory. Cambridge: Cambridge University Press.
Joyce, J. (2008). Bayes' theorem. In E. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2008 ed.). URL = <http://plato.stanford.edu/archives/fall2008/entries/bayes-theorem/>.
Milne, P. (1996). log[P(h/eb)/P(h/b)] is the one true measure of confirmation. Philosophy of Science, 63, 21-26.
Roche, W. (2015). Evidential support, transitivity, and screening-off. Review of Symbolic Logic, 8, 785-806.
Roche, W. (2016). Confirmation, increase in probability, and partial discrimination: A reply to Zalabardo. European Journal for Philosophy of Science, 6, 1-7.
Roche, W., and Shogenji, T. (2014). Dwindling confirmation. Philosophy of Science, 81, 114-137.
Roush, S. (2005). Tracking truth: Knowledge, evidence, and science. Oxford: Oxford University Press.
Schlesinger, G. (1995). Measuring degrees of confirmation. Analysis, 55, 208-212.
Zalabardo, J. (2009). An argument for the likelihood-ratio measure of confirmation. Analysis, 69, 630-635.