Rationally Maintaining a Worldview, Chris Ranalli

Most attention in the epistemology of disagreement has centered on epistemic peer disagreement. Epistemic peer disagreements are disagreements between epistemic peers over a proposition, p, where epistemic peers are people who are roughly equal to each other in intelligence and intellectual ability, and equally likely to evaluate the evidence relevant to the disputed proposition correctly. Examples of epistemic peer disagreement include: people at a dinner party who disagree about the total cost for each patron’s dinner after a 20% tip, each of whom has equally reliable arithmetical abilities and awareness of the receipt; a couple who disagree about whether the local café serves decaf espresso but who frequent the café equally and have similar memorial abilities; or even a group of professional cosmologists who disagree about the fate of the universe. Epistemologists tend to think that rationally responding to disagreements of this type does not turn on substantial disputes about fundamental principles about evidence, rationality, existence, or morality. Instead, the disagreements are sufficiently ‘local’, in the sense that even if the disputants were committed to contrary fundamental principles, their disagreement over the restaurant check, or the local café, or even the fate of the universe, wouldn’t depend on those differences.


Article Citation:

Ranalli, Chris. 2020. “Rationally Maintaining a Worldview.” Social Epistemology Review and Reply Collective 9 (11): 1-14. https://wp.me/p1Bfg0-5uJ.


This article replies to:

❧ Lougheed, Kirk. 2020. “The Epistemic Benefits of Worldview Disagreement.” Social Epistemology. Published online: https://www.tandfonline.com/doi/abs/10.1080/02691728.2020.1794079

1. Introduction

In recent years, epistemologists have turned their attention to deep disagreements or worldview disagreements.[1] Deep disagreements are disagreements over the fundamental principles of one’s worldview.[2] For example, consider the following paradigm case:

Creationism/Naturalism. Mia is a Christian creationist. She believes that all life on Earth, and the Earth itself, was created by God as described in Genesis (that the first human beings were Adam and Eve, and so on). She also believes the doctrines espoused by the New Testament literally and in their entirety. Pia is an atheistic naturalist. She denies everything that Mia believes in this regard. She believes that there is a natural, purely non-intentional explanation of life on Earth; that the Earth originated roughly 4.5 billion years ago by the accumulation of particles orbiting around a protostar, colliding to form larger bodies; that God does not exist; that Jesus never rose from the dead; that the Bible carries no epistemic authority over and above historical and anthropological matters; and so on. In turn, Mia and Pia reject each other’s various fundamental metaphysical and epistemological commitments, the ones constitutive of their creationist and naturalist worldviews, respectively. When they exchange reasons for their views, they reject each other’s reasons, arguing that those reasons presuppose commitments they reject.

Deep disagreements like the creationist/naturalist case are examples of worldview disagreements. Worldview disagreements, unlike ordinary disagreements, are not over one proposition but many. Worldviews are a package deal. If you believe that God is an intentional agent who created all life on Earth, you are not a naturalist. Naturalists believe that the world is entirely physical, or at least entirely explicable in scientific terms. The naturalist denies not simply that Jesus rose from the dead but that anyone could be raised from the dead as a miracle or supernatural act. The naturalist rejects any and all supernatural explanations. The creationist accepts some supernatural explanations.

Worldview disagreements tend to be deep because they commit disputants to disagreeing over their fundamental metaphysical and epistemological commitments.[3] For example, the Christian creationist will believe that the Bible can be a source of evidence about various supernatural events, one that goes beyond historical and anthropological evidence. The atheistic naturalist denies this. It is part of the Christian’s metaphysics that God exists; that Jesus existed and was more than a human being; that there are souls; that miracles can occur; and so forth. The naturalist denies all of this. For the naturalist, only physical matter and energy exist. Only what can ultimately be explained by science should be believed. Their disagreement is deep precisely because it involves a commitment to disagreeing about these various fundamental metaphysical and epistemological commitments, whereas more ordinary disagreements, such as whether the grocery store will close early on Sunday, or how much a 20% tip will be for a large party that spent $368.71 for dinner, do not. Mia and Pia’s disagreement about the origins of the Earth and life, for example, will ultimately lead them to disagree about creationism/naturalism, whereas their ordinary disagreements stop with common ground.
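For concreteness, the ordinary tip disagreement has a determinate answer that shared arithmetic settles. The example does not specify the size of the party, so suppose, purely for illustration, ten patrons splitting the check evenly:

\[
0.20 \times \$368.71 \approx \$73.74, \qquad \$368.71 + \$73.74 = \$442.45, \qquad \$442.45 \div 10 \approx \$44.25 \text{ per patron}.
\]

Nothing in this computation turns on either disputant’s fundamental principles, which is precisely why such disagreements stop with common ground.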

How, from an epistemological point of view, should disputants respond to such disagreements? As with the epistemology of peer disagreement literature, there are proponents of conciliatory and steadfast responses. Proponents of the conciliatory view will say that you should always give your interlocutor’s belief in their fundamental principles some epistemic weight, thereby decreasing your confidence in your own fundamental principles (Feldman 2005; Matheson 2018). Proponents of the steadfast view, by contrast, will say that you sometimes rationally ought to maintain your fundamental principles in the face of deep disagreement (Kappel 2018). Broadly speaking, both views accept an evidentialist or veritistic conception of epistemic rationality, but many Wittgensteinians say that our fundamental principles lie outside the scope of epistemic rational evaluation so understood. On their view, our fundamental principles are not in the market for evidential evaluation, or for evaluation which indicates whether or not they are true (Wright 2014). However, while Wittgensteinians aren’t evidentialists, they can still allow that our committal propositional attitudes to our fundamental principles can be subject to certain epistemic rationality norms, or else be partly constitutive of epistemic rationality (Coliva and Palmira 2020; Ranalli 2018b). For this reason, there is room for a non-evidentialist steadfast response to deep disagreement (Coliva and Palmira 2020; Ranalli 2018b).

Kirk Lougheed (2020) has recently developed an interesting position in his paper “The Epistemic Benefits of Worldview Disagreement”, published in Social Epistemology. The Wittgensteinians tend to think that our attitudes to our fundamental principles are either arational or else rational for non-evidential reasons, while standard steadfasters and conciliationists agree that the rationality of our attitudes to our fundamental principles turns on the evidence one currently possesses, disagreeing only about what epistemic rationality requires of our responses to the evidence in such cases. Lougheed’s position, by contrast, is that it is at least partly a matter of the potential epistemic benefits that arise from maintaining one’s belief that their fundamental principles are correct (i.e., that their worldview is true). While this view is undoubtedly interesting and worth exploring in more detail, what I want to do here is present some reasons for doubting whether his main argument, the so-called “Epistemic Benefits of Worldview Disagreement Argument”, sufficiently supports that position. I’ll be arguing that the two key principles which underwrite his argument are probably mistaken and for this reason—although Lougheed has carved out an interesting position in the epistemology of deep disagreement—it is not yet adequately supported. This of course leaves room for updating or altering the argument so that it succeeds. My hope is that this paper can stimulate such a reaction.[4]

2. The Epistemic Benefits of Worldview Disagreement Argument

Here is Lougheed’s argument in full:

(1) If agent S encounters epistemic peer disagreement over proposition P and subsequently discovers that disagreement over P entails a disagreement over her worldview W (a set of propositions including P), then in order to rationally maintain W she should examine whether W is theoretically superior to the competing worldview.

(2) If S evaluates the theoretical virtues of W, then S will gain a better understanding of W, including being better informed about the truth of W.

(3) S discovers an epistemic peer who believes not-P.

(4) S subsequently discovers that the disagreement about whether P entails a disagreement between two competing worldviews W and W*.

Therefore,

(5) In order to rationally maintain W, she should examine whether W is theoretically superior to W*.

Therefore,

(6) S should evaluate the theoretical virtues of W.

Therefore,

(7) S will gain a better understanding of W, including being better informed about the truth value of W (see Lougheed 2020, 6).

The two key principles of his argument are expressed by premise 2 and subconclusion 5. Subconclusion 5 follows from 1, 3, and 4: premises 3 and 4 jointly discharge the antecedent of premise 1, the assumption that one has discovered an epistemic peer who holds a contrary worldview:

Theoretical Superiority Examination: In order to rationally maintain W, she should examine whether W is theoretically superior to W*.

That’s the first half of his argument. The second half crucially relies on the following principle, expressed by premise 2:

Theoretical Evaluation → Benefits Principle: If S evaluates the theoretical virtues of W, then S will gain a better understanding of W and become better informed about the truth of W.

6 affirms the antecedent of this principle, from which the conclusion 7 follows. By conditional introduction, one can infer that if you encounter an epistemic peer who disagrees with your worldview, holding a contrary worldview W*, then you should examine the theoretical superiority of your worldview, where doing so will yield a better understanding of W and a better position to evaluate whether W is true.
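To make the inferential structure fully explicit, here is a minimal propositional reconstruction; the letters and glosses are mine, not Lougheed’s notation. Let D stand for the discovery described in premises 3 and 4 (a peer disagreement over P that entails a disagreement between W and W*), E for ‘S should examine whether W is theoretically superior to W*’, V for ‘S evaluates the theoretical virtues of W’, and B for ‘S gains a better understanding of W and becomes better informed about its truth-value’:

\[
\begin{array}{lll}
(1) & D \rightarrow E & \text{premise} \\
(2) & V \rightarrow B & \text{premise} \\
(3), (4) & D & \text{premises} \\
(5) & E & \text{from (1), (3), (4), by modus ponens} \\
(6) & V & \text{from (5), assuming S undertakes the prescribed examination} \\
(7) & B & \text{from (2) and (6), by modus ponens}
\end{array}
\]

Discharging D then yields the conditional D → B. Note that, on this reconstruction, the step from (5) to (6) quietly moves from a prescription (‘should examine’) to a performance (‘does evaluate’); none of my objections below turn on this, but it is worth flagging.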

The position is undoubtedly interesting. Part of its interest lies in the fact that Lougheed intends for this argument to demonstrate why it can be epistemically rational for one to maintain their doxastic attitude DA to W in response to peer disagreement over W. The basic idea is that there are “epistemic benefits” grounded in “better understanding W” as well as “being better informed” about W’s truth-value. By engaging dialectically with someone who is reasonable and yet holds a contrary worldview, you stand to benefit epistemically, because you should examine your own worldview as well.

3. On Theoretical Superiority Examination

One problem is that it’s not clear whether the Theoretical Superiority Examination principle is supposed to be a necessary or a sufficient condition for epistemically rationally maintaining one’s worldview:

Necessity of Theoretical Superiority: If S rationally maintains W, then S should rationally examine whether W is theoretically superior to the competing presented worldview W*.[5]

Sufficiency of Theoretical Superiority: If S should rationally examine whether W is theoretically superior to the competing presented worldview W*, then S rationally maintains W.
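Putting the two readings side by side makes the difference vivid. Abbreviating ‘S rationally maintains W’ as R and ‘S should rationally examine whether W is theoretically superior to W*’ as E (my shorthand), the two principles are converses:

\[
\text{Necessity: } R \rightarrow E, \qquad \text{Sufficiency: } E \rightarrow R.
\]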

We might be able to settle this interpretive question more easily by applying the principle of charity to see what makes the argument most plausible, but I think that is somewhat difficult in this case because both principles are highly problematic. I’ll be arguing that the Necessity of Theoretical Superiority requirement is too intellectually demanding—and thereby plausibly incorrect—and that the Sufficiency of Theoretical Superiority requirement is typically irrelevant vis-à-vis rational belief in worldviews, which suggests that it is incorrect as well. As we will see, both principles are problematic because it’s not clear that mere examination of worldviews is what rationality requires in any case. That’s the point we’ll turn to next.

3.1. Against Mere Examination

Both the Necessity of Theoretical Superiority and the Sufficiency of Theoretical Superiority requirements imply that disputants ought merely to examine each other’s worldviews, but people can successfully examine each other’s worldviews in ways that are intellectually vicious and thereby at odds with epistemic rationality (or at least with the goals of the intellectually virtuous agent). For example, Jonas the scientistic atheist might disagree with Martha the evangelical theist only for sport or ‘intellectual combat’, where the goal is to humiliate her and other theists rather than conscientiously work with them through the intricacies of their worldview with equal respect for their capacity as epistemic agents (and vice-versa). They might examine each other’s views simply to refute them.

In this way, disputants can successfully examine each other’s worldviews but in merely vicious ways. For example, one might examine their disputant’s worldview:

Myside-biased Inquiry: with an eye only to criticism: to finding out what’s wrong with their opponent’s worldview, and not at all their own, so as to refute their opponent. They look for only the good-making features of their own worldview, while looking for only the bad-making features of their disputant’s worldview.

Unserious Inquiry: in a completely unserious way, such as when a person antecedently considers the worldview to be completely ‘crazy’ and not worth considering seriously.

Lougheed might say that a myside-biased way of examining one’s worldview and their opponent’s worldview is at odds with treating their opponent as an epistemic peer, but I don’t think this would be right. You might sincerely believe that your opponent is just as likely as you are to be correct, but still examine their worldview as well as your own in a myside-biased way. For people typically aren’t aware that they are engaging in such biased evaluation towards their own view (Kruglanski and Boyatzi 2012).

There is a stronger tension between Unserious Inquiry and recognition of epistemic peerhood. For how could one regard one’s disputant as an epistemic peer and yet inquire into their worldview (or even one’s own worldview) in an unserious way? I think this is possible in the following way: we sometimes think that our peers believe weird things without lacking confidence in their general capacity for rational evaluation. The recognition of peerhood doesn’t require a general tendency to take everything they believe seriously. Sometimes our peers make obvious mistakes, or make unobvious mistakes that are more easily recognized by one’s interlocutor than by oneself. Indeed, this might be especially salient in the case of worldview disagreement, since worldview belief is predominantly a function of identity preservation and emotional regulation, rather than an unbiased evaluation of evidence. In this way, people who oppose our worldviews might be in a better position than we are to find out what errors they contain (if any), because of our tendency to evaluate those positions more closely connected to our identities in biased ways compared with our more ordinary beliefs.

For this reason, both requirements need to be amended. One way to do this would be to build in an open-mindedness component. Think of it like this: when you dialectically engage someone who is your peer, but holds a different (and contrary) worldview, you shouldn’t simply examine their worldview. For you can do that in a myside-biased way or in an unserious way. Instead, you should do so open-mindedly. What do I mean by ‘open-minded examination’? I think Heather Battaly’s (2018) conception of open-mindedness is extremely helpful in this sort of case, since on her account open-mindedness is a matter of considering seriously the relevant alternatives to one’s beliefs. In this case, to open-mindedly examine a worldview would be to consider that worldview seriously, if indeed it is a relevant alternative to one’s worldview. Whether the worldview is relevant will depend on factors like whether one antecedently has good reason to believe that the alternative is likely to be false (Battaly 2018, 269). For example, suppose your colleague is otherwise reasonable but you sadly discover that she endorses Holocaust denialism, a morally repugnant theory. This might just be one major feature of a more general repugnant worldview. If you fail to examine that worldview seriously, it doesn’t seem like a failure of rationality on your part: for that worldview just isn’t a relevant alternative to your own.

So, we should amend the generic Theoretical Superiority Examination requirement as follows:

Theoretical Superiority Open-minded Examination: In order to rationally maintain W, S should open-mindedly examine whether W is theoretically superior to W*.

For respecting this principle would guard against myside-biased inquiry as well as unserious inquiry.

3.2. Against Necessity

Recall that the Necessity of Theoretical Superiority requirement says that:

Necessity of Theoretical Superiority: If S rationally maintains W, then S should rationally examine whether W is theoretically superior to the competing presented worldview W*.

The goal of this section is to argue that open-mindedly examining the comparative theoretical superiority of one’s worldview is better understood as an epistemic ideal rather than a requirement of epistemic rationality in response to worldview disagreement.

What is it for one worldview to be theoretically superior to another? Lougheed argues that we need to evaluate worldviews in accordance with a set of criteria, which include the following (see Lougheed 2020, 8):

External coherence: one should check, in non-biased ways, whether W fits the current scientific data; is explanatory of the data; is simple; and is internally consistent.

Internal coherence: one should check, in non-biased ways, whether the propositions in W are consistent with each other.

Explanatory scope: one should check, in non-biased ways, whether W explains all of the relevant phenomena.

Simplicity and Parsimony: one should check, in non-biased ways, the number of kinds as well as the number of entities that W is committed to; S should have good reason to believe that W is parsimonious.

Predictive power: one should check, in non-biased ways, how predictively powerful W is.

The problem is that this is just too intellectually demanding. If evaluating the theoretical superiority of your worldview W against your peer’s contrary worldview W* in this sense were a requirement for you to rationally maintain W, then almost no one except the Vulcans could rationally maintain worldviews. To so much as rationally maintain a worldview, one would have to undertake a certain kind of theoretical project that would be difficult and arduous for even the most astute, intelligent, learned, and virtuous among us.

Indeed, while one might be cautious here and think that perhaps Lougheed means for the Necessity of Theoretical Superiority to hold only for ideally rational agents, the evidence suggests otherwise. He writes that in the “real-life disagreements” he is interested in, “there will be numerous disagreements that are ultimately worldview disagreements” (Lougheed 2020, 7). So we can assume that it applies to my grandmother, your uncle, the clerks at the local grocery store, philosophy professors, the Pope, gardeners, scientists, political pundits, and so on. Any of these people might find themselves in worldview disagreements with other people and, when they do, they would rationally maintain their worldview only if they examined the theoretical superiority of their worldview compared with their peer’s worldview. This requires them to engage in external coherence evaluation, internal coherence evaluation, explanatory scope evaluation, simplicity evaluation, and predictive power evaluation, since that’s just what it is to examine the theoretical superiority of their worldview. This might be possible for some people, but not many.

Our normative principles should not be so demanding that most people couldn’t meet their demands. It’s just unrealistic that most people could do this, much less the person trying to make ends meet at the local grocery store. But it’s not unreasonable for them to have worldviews. The Necessity of Theoretical Superiority requirement seems to have the uncomfortable consequence that worldviews are the special preserve of the intellectually virtuous; of the epistemic elite, who have the knowledge and ability to undertake the theoretical project of examining worldviews so understood, something apparently required by epistemic rationality as per the Necessity of Theoretical Superiority requirement. Almost everyone else would irrationally maintain their worldview. Put differently: almost everyone should lack a worldview, or at least abandon it as soon as they become aware of a reasonable person who holds a contrary worldview. This is why I want to urge us to think of the theoretical examination of worldviews as a normative epistemic ideal rather than a requirement of rationally maintaining worldviews. People who open-mindedly theoretically examine their worldview in response to worldview disagreement are praiseworthy, but a failure to undertake such an extensive theoretical project would not be blameworthy; it would be routine and expected.

Connected to the demandingness objection pressed here is the idea that the Necessity of Theoretical Superiority requirement implies skepticism about the epistemic rationality of worldviews. For if it really were a requirement of rationality that we undertake such theoretical evaluation of worldviews in response to worldview disagreement, our likely failure suggests that we don’t rationally maintain our worldviews after all. Perhaps this is a virtue of the Necessity of Theoretical Superiority requirement. Social psychology suggests that the psychology of worldview maintenance and core belief evaluation is not a matter of evaluating the evidence in unbiased ways or checking the theoretical virtues of our worldviews, but of identity preservation and emotional regulation (Kahan 2013). Bracketing pragmatic and moral encroachment, this also suggests that most people do not rationally hold their worldviews, since identity-protective cognition and emotional regulation are not oriented towards accuracy, even though it might be instrumentally rational for the agent to engage in such biased evaluations. The goal of having accurate beliefs conflicts with the goal of maintaining our identities (Van Bavel and Pereira 2018). So the epistemology and the social psychology would match here. The problem for Lougheed, insofar as he endorses the Necessity of Theoretical Superiority requirement, is that he presumably wanted rational worldview belief to be available to the average human being, but his proposed requirement for epistemic rationality is so demanding that we would all likely fail to rationally hold our worldviews as a result.

Finally, the motivation for why one should comparatively examine the theoretical superiority of one’s worldview against the presented competitor relies upon a contentious analogy between scientific theories and worldviews. Lougheed says:

[1] “Why think that the rationality of W depends on how it exemplifies various theoretical virtues? I have two primary responses to this question. First, consider that worldviews are, at least in part, explanations of the features of the universe. Worldviews are comprehensive theories of the universe. They are theories of everything” (Lougheed 2020, 7).

Let’s grant [1]. Even if worldviews are general theories of the universe, this doesn’t imply that the rigorous standards employed by science or other areas of academic inquiry are necessary for ordinary non-theoreticians to rationally maintain their worldviews in response to disagreement. After all, many people believe various scientific theories, like the Theory of Evolution by natural selection, the Big Bang Theory, and General and Special Relativity, without having engaged in any comparative theory examination (and wouldn’t be able to do so competently in response to disagreement either). This might be required of people in their capacity as scientists, but that doesn’t show that such local disciplinary or even epistemic norms spill over into the everyday world. We typically permit belief in mainstream scientific theory on the basis of testimony or routine school instruction, without a more involved comparative examination of theories. If ordinary non-scientists can rationally believe these scientific theories without fulfilling the demands of the analogue requirement for rational belief, why think it should be any different for worldviews qua theories?

3.3. Against Sufficiency

The Sufficiency of Theoretical Superiority requirement is suggested by the following passage:

“Second, there could be other things required of S in order to rationally maintain W. To my mind, understanding worldviews as theories is the simplest and most accurate way to compare competing worldviews. But I do not deny it might be necessary to employ other methods. Thus, I claim that examining the theoretical virtues of W is sufficient for the rationality of W, though perhaps not both necessary and sufficient” (ibid., 7).

The thought is that:

A. The fact that S should (and actually does) examine whether their worldview W is theoretically superior to the competing presented worldview W*

is by itself enough for:

B. S rationally maintaining W.

It’s not clear that A entails B. How could the result of one’s examination that W is theoretically superior to W* by itself be enough for rationally believing W? Earlier I argued that such examination isn’t necessary because it’s too intellectually demanding; now I want to suggest that it’s not sufficient either.

To see why, consider the following example:

Comparative Anthropology. Suppose an anthropologist specializing in emerging new age religious worldviews is undertaking an ethnographic study of new age religious groups in the southwest U.S. and conducts a comparative, open-minded theoretical examination of their worldviews. The result of her study is substantial evidence that group A’s worldview is theoretically superior to group B’s.

Is this sufficient for our anthropologist to rationally believe that A’s worldview is probably correct? Intuitively not. Suppose she is agnostic about whether A’s worldview is correct and whether B’s worldview is correct, and thereby suspends judgment on A’s and B’s worldviews. For she has only uncovered more theoretical virtues of A over B, and not yet any reason to believe that A is correct; nothing which indicates that the key commitments of worldview A are likely to be true. The comparative structural features and theoretical virtues of a theory needn’t be indicative of whether the theory is true. It is traditionally a challenge for coherence theories of justification to account for the link between coherence and truth, and that problem is reproduced here when we think about the various structural properties of theories: whether a theory is predictively powerful, internally coherent, externally coherent, parsimonious, has wide explanatory scope, and actually has the resources to explain the relevant phenomena. These are certainly important features of theory-building and theory-acceptance, but it’s not clear that the discovery of highly important structural features of a theory, or other theoretical virtues, would be enough for one rationally to believe that the theory is true.

This raises a more general question for Lougheed. The question is whether undertaking open-minded examination of the theoretical superiority of one’s worldview W over the competitor W* is sufficient (or necessary) for rationally preserving one’s worldview in response to fully disclosed worldview disagreement, or whether it would be sufficient (or necessary) for rationally adopting one’s worldview in the first place. If the latter, then I would disagree. I don’t see how comparative theoretical superiority could be necessary or sufficient for adopting a worldview. This is because, as I have argued, it seems overly intellectually demanding (if intended to be necessary) and otherwise not sufficiently indicative of truth (if intended to be sufficient). This naturally leads one to wonder: if it’s neither plausibly necessary nor sufficient for rationally adopting an attitude of belief towards one’s worldview, why would it be necessary or sufficient for rationally retaining belief in one’s worldview? Here is where I think Lougheed is on stronger ground. We might think that once one already rationally believes W, the presentation of an alternative worldview W* by a peer who believes W* is a potential defeater for S’s belief in W. So S wouldn’t need to jump through all of the hoops that rationality demands of one in forming the belief that W is correct; it would be enough for one to engage in comparative open-minded examination of the theoretical superiority of their worldview W over the competitor W* to rationally maintain belief in W.

As before, however, even if it’s enough for one to engage in the comparative examination of the theoretical superiority of their worldview W over the competitor W* to preserve the rationality of their belief in W, very few people will be able to do it. Is anything else sufficient? If not, then we run another epistemological risk here: for people might too easily lose a rational belief—indeed, their worldview—since it will be too challenging to undertake the theoretical project implied by comparative theoretical superiority examination.

4. Against the Theoretical Evaluation → Benefits Principle

The second key principle Lougheed invokes is a principle that links evaluating the theoretical virtues of one’s worldview with gaining epistemic benefits from such evaluation:

Theoretical Evaluation → Benefits Principle: if S evaluates the theoretical virtues of W, then S will both gain a better understanding of W and become better informed about the truth-value of W (Lougheed 2020, 6; my emphasis and principle name).

In order to properly evaluate this principle, it is necessary to explore what ‘better understanding of W’ means, as well as what being ‘better informed’ vis-à-vis the truth-value of one’s worldview W means. Lougheed says that:

Understanding involves grasping different concepts and how they relate to one another. Thus, agents will gain a better understanding of their worldview by weighing it against different theoretical virtues. This is clearly an epistemic benefit (Lougheed 2020, 9).

I think that Lougheed is probably right here, although it’s worth emphasizing that this is still an idealization. We sometimes stand to gain the benefit of understanding the conceptual connections within our worldviews, and between our worldviews and competitor worldviews, when we engage in theoretical superiority examination; it puts us in that position if we are sufficiently intellectually virtuous already. Moreover, by weighing our worldview against the list of theoretical virtues—simplicity, parsimony, internal and external coherence, explanatory scope, and predictive power—we learn more fine-grained details about our worldviews. We are prone to identify the virtues, but also prone to misidentify the vices. I don’t want to deny that there is some epistemic payoff here.

There are three questions I want to pursue concerning whether theoretical virtue understanding is sufficiently epistemic to rationalize belief. First, I wonder whether theoretical virtue understanding is enough to make it presently rational for one to maintain their doxastic attitude towards their worldview in response to disagreement. For Lougheed (2020) is not simply trying to demonstrate that inquiry into the theoretical virtues of one’s worldview has epistemic benefits, but that its epistemic benefits rationalize the keeping of one’s present doxastic attitude to their worldview in response to disagreement. This is akin to his 2020 book project, where he writes: “The answer to the question of what I ought to believe right now in the face of epistemic peer disagreement ought to be answered by considering both synchronic and diachronic reasons” (Lougheed 2020, 103). Perhaps some epistemic benefits can make the retention of one’s attitude rational, but it’s not clear that theoretical virtue understanding is enough. Isn’t that just too weak? This worry is consistent with the idea that epistemic benefits are sometimes sufficient to rationalize belief, but that the kind of epistemic benefits that theoretical virtue understanding consists in is not good enough. Put generally: the kind of epistemic benefits on offer matters.

Connected to the previous question is the worry that theoretical virtue understanding is not really an epistemic good or benefit at all. Epistemic goodness is standardly taken to be goodness vis-à-vis truth, but it’s not clear what grasping the conceptual relations within one’s worldview, or its structural theoretical virtues, has to do with truth as such. Is grasping conceptual connections likely to indicate that your worldview is true? Are theoretical virtues reliable indicators of truth? These questions need answers.[6] Of course, one might say: “look, it’s not merely theoretical virtue understanding that warrants maintaining one’s belief that W in the face of disagreement but the fact that one gains lots of true beliefs: that their worldview W has concepts c1, c2, c3 as constituent parts, that c1 is connected to c2 in such-and-such ways, that W is explanatory of phenomena p1, p2, p3, and so on. These are all true beliefs one gets by acquiring an understanding of the theoretical virtues of their worldview.” So the basic thought is that truth is a potential benefit to be gained here as well. The trouble is that this benefit is always available to us. For any belief one might have, that x is F (fill it out however you want), even if that belief is false, one stands to gain the epistemic benefits of grasping the concepts expressed by ‘x’ and ‘F’, as well as the truths about their internal conceptual relations and relations to other concepts. It’s easy to add truth; too easy.

There is a threshold question here as well. How much future epistemic benefit is enough to rationalize present belief? For example, there’s some epistemic benefit to having a coherent yet mostly false set of beliefs, namely, the epistemic benefits of coherence and stable beliefs over time, compared with an incoherent set of beliefs in which there are few if any support relations between them. And, if we are interested in self-knowledge, there are also the higher-order beliefs about what one believes, which could all be true: even if one’s first-order belief that p is false, they’re normally in a position to know that they believe p, and likewise for their other first-order beliefs that q, r, s, and so forth, all of which might hang together in a coherent framework. So, there is epistemic benefit to holding a coherent, mostly false worldview, since one stands to gain lots of truths by way of their higher-order beliefs. If the epistemic benefits of believing W as such were good enough to believe W, then you would have a reason to hold W because it has epistemic benefits for you. If its epistemic benefits could be all of your true higher-order beliefs about W, despite W’s systematic falsity, we might worry that this too easily rationalizes worldview belief.

The final question asks: what exactly is the epistemological relationship between one’s present belief in W in the face of deep disagreement and the future epistemic benefits which might accrue from such an attitude?[7] Can future benefits fully justify a belief, or do they provide only partial justification? Here’s an example to help better structure the question. Suppose W is false but S retains W because it has many more theoretical virtues than the competitor W* (perhaps it is maximally theoretically virtuous). In the distant future of S’s life, she learns that W* is actually correct, and suppose she wouldn’t have learned that W* is true (which implies that ~W) without sticking to her initial belief that W all these years. For perhaps she wouldn’t have attended the International Conference on W, where she learned that W* is true. In a twist, then, what made it rational for her to maintain W in response to her present disagreement over her worldview is the fact that she gets the good of truly believing W* in the future. It is the epistemic status of a different, future doxastic attitude that rationalizes her present false doxastic attitude. This picture just seems fishy, and I invite Lougheed to explore the details. When we zoom in on a simpler case, like my belief that p, that there’s gold buried under my apartment (a false belief), it’s very hard to see how I might be rational in maintaining that belief in the face of disagreement with my friend merely because, by presently believing that p, I would later come to truly believe that ~p. The benefit of later truly believing that ~p simply looks, on its face, to be irrelevant to whether I’m rational in presently believing that p. So my hope is that Lougheed can fill in the details for us here, so that we can better understand the relationship between epistemic benefits and rationality.

References

Battaly, Heather. 2018. “Closed-Mindedness and Dogmatism.” Episteme 15 (3): 261-282.

Coliva, Annalisa and Michele Palmira. 2020. “Hinge Disagreement.” In Social Epistemology and Epistemic Relativism, edited by Martin Kusch, 11-29. Routledge.

Kahan, Dan. 2013. “Ideology, Motivated Reasoning, and Cognitive Reflection: An Experimental Study.” Judgment and Decision Making 8 (4): 407-424.

Kappel, Klemens. 2018. “Higher Order Evidence and Deep Disagreement.” Topoi. https://doi.org/10.1007/s11245-018-9587-8.

Kruglanski, Arie W. and Lauren M. Boyatzi. 2012. “The Psychology of Closed and Open Mindedness, Rationality, and Democracy.” Critical Review 24 (2): 217-232. https://doi.org/10.1080/08913811.2012.711023.

Lougheed, Kirk. 2020. “The Epistemic Benefits of Worldview Disagreement.” Social Epistemology. https://doi.org/10.1080/02691728.2020.1794079.

Lougheed, Kirk. 2020. The Epistemic Benefits of Disagreement. Springer.

Lynch, Michael. 2010. “Epistemic Disagreement and Epistemic Incommensurability.” In Social Epistemology, edited by Adrian Haddock, Alan Millar, and Duncan Pritchard, 262-277. Oxford: Oxford University Press.

Lynch, Michael. 2016. “After the Spade Turns: Disagreement, First Principles and Epistemic Contractarianism.” International Journal for the Study of Skepticism 6: 248-259.

Matheson, Jonathan. 2018. “Deep Disagreements and Rational Resolution.” Topoi. https://doi.org/10.1007/s11245-018-9576-y.

Ranalli, Chris. 2018a. “What is Deep Disagreement?” Topoi. https://doi.org/10.1007/s11245-018-9600-2.

Ranalli, Chris. 2018b. “Deep Disagreement and Hinge Epistemology.” Synthese: 1-33. https://doi.org/10.1007/s11229-018-01956-2.

Van Bavel, Jay J. and Andrea Pereira. 2018. “The Partisan Brain: An Identity-Based Model of Political Belief.” Trends in Cognitive Sciences 22 (3). https://doi.org/10.1016/j.tics.2018.01.004.

Wright, Crispin. 2014. “On Epistemic Entitlement II: Welfare State Epistemology.” In Scepticism and Perceptual Justification, edited by Dylan Dodd and Elia Zardini. Oxford: Oxford University Press.


[1] See Lougheed (2020). Lougheed talks about ‘believing a worldview’ but it seems to me that one cannot in the strict sense believe a worldview unless a worldview is a proposition. One cannot believe a set of propositions either, but rather only each member of the set. For this reason, we should read ‘believe the worldview W’ or ‘believe that W is true’ as shorthand for believing that the constitutive principles or commitments of the worldview are true.

[2] Lynch (2010, 2016) says that deep disagreement consists in disagreement over fundamental epistemic principles, but I argued in my (2018a) that this is too restrictive and should thereby be expanded to include fundamental normative principles generally as well as fundamental metaphysical principles.

[3] Perhaps it is possible to have a non-deep worldview disagreement, in the sense that while the disputants agree on their fundamental principles, they radically disagree over their implications. It’s also important to note that fundamental principles can be more or less fundamental. Perhaps the Christian and the Naturalist agree on some fundamental principles or commitments, like “perception is reliable” and “physical objects exist”, but they disagree over other principles and commitments which are constitutive of their respective worldviews, such as that “God exists/does not exist”, that “reality is/is not entirely physical”, or that “miracles are possible/impossible”.

[4] Many thanks to the editors of the SERRC for the opportunity to comment on Lougheed’s very interesting and stimulating paper.

[5] As an interpretative linguistic exercise, I think the Necessity of Theoretical Superiority requirement is what Lougheed is committed to. Lougheed writes: “In order to rationally maintain W, she should examine whether W is theoretically superior to W*”. The syntax of this phrase initially suggests the reading ‘in order to A, B’, where the second claim B is intended to be sufficient for A. To test this reading, let’s restate token sufficiency claims in epistemology to fit that syntactic structure. For example, consider the following sufficiency theses, the K-Norm for Assertion and Phenomenal Conservatism, restated to fit the syntactic structure of ‘in order to A, B’, above:

In order to know that P, S should assert that P.
In order for it to seem to one that P, S rationally believes that P.

When we restate the key sufficiency claims in the ‘in order to A, B’ structure, we get the wrong views. So this suggests that Lougheed had in mind the Necessity of Theoretical Superiority principle. There is also some textual evidence that Lougheed maintains the Necessity of Theoretical Superiority requirement. He asks: “Why think that the rationality of W depends on how it exemplifies various theoretical virtues?” That the rationality of believing W depends on whether it exemplifies theoretical virtues suggests that theoretical virtue examination is necessary for rationally believing W, not sufficient.

[6] It’s difficult to interpret Lougheed’s views here on the connection between epistemic benefits and epistemic rationality. On the one hand, his view seems to be that insofar as there’s no sufficiently strong evidence against believing W, believing W can be epistemically rational for one in response to disagreement if believing W has downstream epistemic benefits (whether for oneself or others). On the other hand, he says in his paper that he has “been assuming throughout that some version of evidentialism is true” (Lougheed 2020, 10). But evidentialism says that you are epistemically rational or justified in believing that p at t if and only if your evidence supports p at t. What distinguishes evidentialism from other theories of epistemic rationality or justification is that evidence is necessary and sufficient for epistemic rationality or justification. Lougheed’s view is clearly at odds with that. So I think Lougheed needs to clarify what his position is, or what it implies more generally.

[7] In Chapter 5 of his 2020 book The Epistemic Benefits of Disagreement (Springer), Lougheed addresses the worry about future epistemic benefits impacting the status of one’s present belief as rational. But his response seems to me to connect epistemic goodness with epistemic rationality too closely, and it’s not clear how his response there carries over here in the case of worldview belief and the alleged epistemic benefits of theoretical virtue understanding.


