
Partisan Epistemology and Misplaced Trust

Published online by Cambridge University Press:  02 October 2023

Boyd Millar*
Affiliation:
Trent University, Peterborough, K9L 0G2, Canada

Abstract

The fact that each of us has significantly greater confidence in the claims of co-partisans – those belonging to groups with which we identify – explains, in large part, why so many people believe a significant amount of the misinformation they encounter. It's natural to assume that such misinformed partisan beliefs typically involve a rational failure of some kind, and philosophers and psychologists have defended various accounts of the nature of the rational failure purportedly involved. I argue that none of the standard diagnoses of the irrationality of misinformed partisan beliefs is convincing, but I also argue that we ought to reject attempts to characterize these beliefs as rational or consistent with epistemic virtue. Accordingly, I defend an alternative diagnosis of the relevant epistemic error. Specifically, I maintain that such beliefs typically result when an individual evaluating testimony assigns more weight to co-partisanship than he ought to under the circumstances, and consequently believes the testimony of co-partisans when better alternatives are available.

This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press

1. Introduction

A good many people believe quite a bit of the misinformation they encounter. For instance, a significant number of Americans believe that anthropogenic climate change is not occurring, that COVID-19 is no more dangerous than the flu, and that Joe Biden won the 2020 presidential election due to fraud. To explain why so many people believe such conspicuous misinformation (i.e., claims concerning which there is a considerable amount of publicly available evidence establishing that they are false), we should begin with the fact that these misconceptions are not evenly distributed throughout the population. A recent Pew survey found that while 84% of self-described liberal Democrats said that human activity contributes “a great deal” to climate change, 84% of self-described conservative Republicans rejected this claim.Footnote 1 A Gallup poll conducted in September 2020 found that while 40% of Republicans said that the flu causes more deaths than COVID-19, only 13% of Democrats said the same.Footnote 2 And a UMass poll conducted in December 2021 found that while 91% of Democrats said that Joe Biden's victory in the presidential election was “probably” or “definitely” legitimate, 71% of Republicans said that his victory was “probably” or “definitely” illegitimate.Footnote 3, Footnote 4

Both Democrats and Republicans, conservatives and liberals, are exposed to misinformation concerning climate change, COVID-19, election integrity, and other important topics. So, why is it that conservatives are much more likely than liberals to believe the misinformation they encounter on these topics? The most plausible explanation is that individuals with different political views seek out different sources of information and trust different individuals and groups. In particular, each of us has significantly greater confidence in the claims of co-partisans, or those belonging to groups with which we identify – people who share our religious affiliation, our political ideology, who belong to the same political party, and so on. And we are especially prone to accept claims made by prestigious co-partisans – politicians, celebrities, television pundits, radio and podcast hosts, and the like. Following Rini (2017), we can call this general phenomenon – individuals granting greater credibility to co-partisans – partisan epistemology; and when this increased credibility plays a significant role in an individual believing some co-partisan's claim (or when decreased credibility plays a significant role in an individual disbelieving an anti-partisan's claim), we can call the resulting belief a partisan belief.

The fact that each of us is prone to hold partisan beliefs explains, in large part, why so many people believe a significant amount of the misinformation they encounter. For instance, presumably large numbers of Republican voters believe that the 2020 presidential election was illegitimate because they trusted Donald Trump when he claimed that fraud determined the outcome; presumably, large numbers of conservatives believe that COVID-19 vaccines are not safe and effective because they trusted at least some of the many prominent conservative pundits who disputed the safety and efficacy of these vaccines; and presumably, large numbers of American conservatives believe that anthropogenic climate change is not occurring because they trust conservative climate change deniers and distrust the liberal politicians and pundits who have been the most prominent advocates for the issue in the United States.Footnote 5

The claim that our tendency to form partisan beliefs explains, in large part, the prevalence of such misconceptions should not be terribly controversial – it is supported by a considerable amount of empirical evidence.Footnote 6 A more difficult question is whether forming or maintaining a partisan belief in misinformation – or misinformed partisan belief – is typically (or at least very often) rational. In other words, the question is: when you believe some piece of misinformation, that p, in significant part because some individual (or group) that you recognize as a co-partisan claims that p, is your belief that p typically rational? In such a case, have you done something you ought not to have done, epistemically speaking? Is your believing that p consistent with epistemic virtue, or are you exhibiting some epistemic vice? (Throughout, I will treat being rational, being consistent with epistemic obligations, and being consistent with epistemic virtues as different ways of characterizing the same property.)

It's natural to assume that misinformed partisan beliefs typically involve a rational failure of some kind. Philosophers and psychologists have defended various accounts of the nature of the rational failure purportedly involved: some maintain that the process via which such beliefs are typically formed involves motivated reasoning; others maintain that the source of such beliefs is typically some epistemic vice, such as gullibility or epistemic laziness. However, one might reject this verdict altogether; one might claim that typically when an individual forms a misinformed partisan belief, she follows a rational procedure that just happens to produce a bad outcome due to unfortunate circumstances. This view has fewer defenders; but Rini (2017) and Levy (2019, 2022) have recently presented arguments suggesting (either directly or indirectly) that misinformed partisan beliefs are typically rational.

I will argue that none of the standard diagnoses of the irrationality of misinformed partisan beliefs is convincing, but I will also argue that we ought to reject attempts to characterize these beliefs as rational or consistent with epistemic virtue. Accordingly, I will defend an alternative diagnosis of the relevant epistemic error. Standard diagnoses claim that an individual forms a misinformed partisan belief because he either doesn't aim at accuracy or is careless with respect to the accuracy of his beliefs. Defenders of partisan belief reply, correctly, that relying on the testimony of co-partisans is generally a reasonable strategy for acquiring accurate beliefs. Even so, I maintain that when individuals form or maintain misinformed partisan beliefs, the strategy they employ to acquire accurate beliefs nonetheless involves a significant error.Footnote 7 Specifically, I maintain that such beliefs typically result when an individual evaluating testimony assigns more weight to co-partisanship than he ought to under the circumstances and consequently believes the testimony of co-partisans when better alternatives are available. In other words, forming or maintaining a misinformed partisan belief is typically inconsistent with an epistemic virtue we can call discernment: with respect to the testimony she receives, the discerning individual allocates trust appropriately – she trusts (and distrusts) the right sources on the right topics to the right extent.Footnote 8 Misinformed partisan beliefs, then, typically involve individuals exhibiting epistemic vice by being undiscerning: they trust sources they ought not to trust on topics and under circumstances that they ought not to trust them, and they distrust sources that they ought to trust on topics and under circumstances that they ought to trust them.

2. Misinformed partisan beliefs

The question at hand – whether forming or maintaining a misinformed partisan belief is typically rational – is only particularly important so long as our tendency to form partisan beliefs explains, in large part, the widespread belief in misinformation. Before turning to our central question, then, we should review the evidence for the claim that partisan belief indeed explains this phenomenon. Accordingly, in the present section, I will first describe some important empirical findings concerning the nature and extent of partisan beliefs, and I will then review evidence suggesting that beliefs of consequential misinformation – for instance, misinformation concerning COVID-19 – are very often produced, in large part, by our elevated trust in co-partisan testimony.

2.1. Partisan beliefs

Individuals who belong to different social groups often have widely divergent beliefs, not only concerning normative matters but also regarding non-normative facts. For instance, members of different religious communities often have very different beliefs regarding how and when the earth was created, and members of different political parties often have different beliefs regarding such things as the source and extent of socioeconomic inequality.Footnote 9 How such differences arise isn't a mystery – the intuitive explanation is that members of different groups believe different things because they rely on and trust different sources of information. Someone who believes that the earth is 6000 years old presumably does so because he trusts the testimony of certain religious authorities, and distrusts the testimony of geologists; individuals who don't belong to the relevant religious group won't trust these same religious authorities and, as a result, won't end up with the same beliefs on the subject.

To fill in some of the details of this intuitive picture, we can look to psychological research concerning how individuals respond to testimony. We all receive a great deal of testimony from a great many sources, and we don't treat each piece of testimony that we receive as epistemically on par – rather, we filter testimony based on a wide variety of factors. Some of these factors concern the testimony's content and – crucially for present purposes – others concern the source. In the present context, two features of sources that we use to filter testimony are particularly important: competence (i.e., the extent to which the source is likely to possess accurate information) and benevolence (i.e., the extent to which the source is likely to be guided by our interests).Footnote 10

We rely on a variety of cues to determine whether some source of information is competent with respect to a given topic. For instance, research in developmental psychology reveals that children accumulate more and more sophisticated methods for identifying competent testifiers as they age. Even infants prefer to receive guidance from individuals who have displayed relevant expertise, and they learn to ignore individuals whose feedback has proved unreliable.Footnote 11 And older children are sensitive not only to a speaker's track record of inaccurate claims but also to whether there are circumstances that may excuse previous false statements.Footnote 12 As they develop, children also learn to rely heavily on certain social cues. They are more likely to accept a speaker's claim when it is consistent with the consensus amongst some relevant group, and they are more likely to accept the claims of speakers who have high social status within a group – for example, dominant or prestigious individuals.Footnote 13

In addition to being competent, it's important to establish that a testifier is benevolent toward us – after all, a competent source might still seek to mislead or manipulate us. As one would expect, then, research in developmental psychology reveals that children learn to rely on a variety of cues to determine whether some source of information is benevolent. For instance, children prefer to receive information from their parents and from teachers that they know personally; and they are more likely to accept information from individuals who have been described as kind or honest, and who belong to their own social group.Footnote 14 Adults, as well as children, sometimes rely on traits such as age, race or gender to establish that someone belongs to their social group; but, as Mercier (2017: 106) notes, we typically utilize “more reliable cues, when such cues are available.” According to Mercier, because individuals belonging to the same significant social group or “coalition” will “tend to share more interests than random individuals,” such individuals “should be more benevolent toward one another”; and consequently, evidence that someone belongs to a social group that I belong to is evidence that she is likely to be benevolent toward me (2017: 106).

Because individuals, and especially prestigious individuals, who belong to social groups with which we identify are likely to possess more traits indicating competence and benevolence, we should expect individuals to place greater trust in the testimony of co-partisans – and, as such, we should expect individuals' beliefs to be strongly influenced by the views of co-partisans. And, indeed, there is considerable direct evidence that information concerning what co-partisans believe has a significant impact on what we believe. For instance, research shows that information concerning the views of co-partisans, and especially co-partisan elites, has a significant influence on an individual's evaluation of specific public policies. As Barber and Pope (2019) found, when an individual learns that some prestigious co-partisan supports a given policy, that individual will be significantly more likely to form positive views of that policy – even when it doesn't match their predominant ideological commitments. In particular, these researchers found that when Republicans were told (contrary to fact) that Donald Trump supported a stereotypically liberal policy, they were 15% more likely to endorse that policy than they would have been otherwise.

In fact, when an individual evaluates a given policy, information concerning the views of prestigious co-partisans can overpower more direct policy-relevant information. For example, Druckman et al. (2013) presented individuals with a pair of arguments concerning a given policy; one argument supported the policy, while the other attacked it, and one argument was quite strong, while the other was manifestly weak. They found that in the absence of partisan cues, individuals' views moved significantly in the direction of the stronger argument; but when individuals were told that the policy which the weaker argument supported was endorsed by co-partisan members of Congress (and opposed by members of Congress from the other party), not only did their views of the policy not move in the direction of the stronger argument, but they moved significantly in the direction of the weaker argument. Relatedly, a recent study found that most of the influence that party cues had on the policy judgements of individuals who were not provided with independent information about the relevant policy was retained for individuals who were provided with such information.Footnote 15 (It's worth noting that information concerning the views of anti-partisans has been shown to have the opposite influence on individuals' beliefs. For instance, Bail et al. (2018) found that when Republicans were repeatedly exposed to the opinions of stereotypical liberals, their beliefs became more conservative as a result.)Footnote 16

If information concerning what co-partisans believe influences an individual's beliefs by virtue of the fact that co-partisans possess features indicating competence and benevolence, then we should find that this influence increases as co-partisan trust increases. And, as it happens, that is precisely what we find. As Bolsen et al. (2014: 258–59) found, partisan cues have much less influence on the beliefs of weak supporters of a given political party and much greater influence on the beliefs of strong partisans. More specifically, they found that the more an individual trusted a particular party on some policy question, the more that individual's evaluation of a given policy was impacted by co-partisan endorsements. So, for instance, they asked participants: “to what extent do you trust members of your political party to provide good advice about which energy policies to support?” And (perhaps unsurprisingly) they found that the more an individual trusted co-partisans, the more information about co-partisans' views influenced that individual's evaluations of a specific energy policy.

Research by psychologists and political scientists, then, suggests that partisan beliefs are common: very often individuals form or maintain the belief that p in significant part because individuals (or groups) that they recognize as co-partisans claim that p. The existing research also suggests that individuals tend to believe co-partisans because they tend to trust co-partisans; yet this fact doesn't settle the specific mechanism via which partisan beliefs are generated.Footnote 17 One possibility is that individuals use information about what co-partisans believe as a heuristic or shortcut when deciding what to believe.Footnote 18 That is, individuals might adopt the beliefs of co-partisans because they don't possess much knowledge on a given topic and want to avoid expending the time and energy that would be required to obtain such knowledge. However, this account of partisan belief is in tension with some of the existing evidence. For instance, if partisan cues operate as heuristics, then relying on such cues should speed up cognitive processing; yet some experiments have found that when individuals rely on information concerning the views of co-partisans, they take longer to make up their minds.Footnote 19 And if individuals use partisan cues to stand in for their lack of background knowledge, then individuals with less relevant knowledge should be more likely to be influenced by partisan cues; but such cues often have a greater influence on individuals with more relevant knowledge.Footnote 20 Such evidence suggests an alternative mechanism: individuals don't typically defer – they don't disregard whatever first-order evidence they possess and simply adopt the beliefs of co-partisans; rather, individuals typically combine information concerning the beliefs of co-partisans with whatever relevant first-order evidence they possess. Yet, because co-partisan testimony is assigned so much importance, it typically plays a decisive role relative to other sources of information when it comes to determining what individuals believe.
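To make the contrast between these two candidate mechanisms concrete, consider a minimal sketch (my illustration, not drawn from the empirical literature; the weighting scheme and all numbers are assumptions). It represents beliefs as credences between 0 and 1, and contrasts outright deference to a co-partisan's claim with a weighted combination of that claim and one's own first-order evidence.

```python
# Toy model (illustrative only): two ways co-partisan testimony
# might fix what an individual believes, with beliefs represented
# as credences in [0, 1].

def defer(testimony_credence: float) -> float:
    """Heuristic/shortcut mechanism: adopt the co-partisan's view
    wholesale, disregarding one's own first-order evidence."""
    return testimony_credence

def combine(prior_credence: float, testimony_credence: float,
            trust_weight: float) -> float:
    """Combination mechanism: mix one's own first-order evidence with
    co-partisan testimony; trust_weight (0 to 1) reflects how much
    weight co-partisanship confers on the testimony."""
    return (1 - trust_weight) * prior_credence + trust_weight * testimony_credence

# Someone whose own evidence mildly disfavors a claim (credence 0.4)
# hears a prestigious co-partisan assert it confidently (0.9):
prior, testimony = 0.4, 0.9

print(f"{defer(testimony):.2f}")                # 0.90 -- first-order evidence ignored
print(f"{combine(prior, testimony, 0.2):.2f}")  # 0.50 -- weak partisan trust: evidence dominates
print(f"{combine(prior, testimony, 0.8):.2f}")  # 0.80 -- strong partisan trust: testimony dominates
```

On the combination picture, the finding reported by Bolsen et al. falls out naturally: the more an individual trusts co-partisans, the larger the weight assigned to their testimony, and the more decisive that testimony becomes relative to other sources of information.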

At present, the evidence supporting each of these competing accounts is mixed.Footnote 21 And in any case, it isn't particularly plausible that there is a single mechanism via which individuals' beliefs are influenced by co-partisan testimony. While it seems clear that individuals don't rely on information about the beliefs of co-partisans primarily as a shortcut to avoid acquiring their own first-order evidence, it is highly plausible that they utilize just such a shortcut with respect to certain topics and under certain circumstances. The ultimate moral, then, is that an individual's beliefs are often determined in significant part by co-partisan testimony and that there are multiple mechanisms via which such testimony influences an individual's beliefs.

2.2. Misinformed beliefs

We've seen that the testimony of co-partisans, and especially prestigious co-partisans, has a powerful impact on what we believe. For instance, knowing what a high-ranking member of our preferred political party believes can cause us to support a policy in tension with our values or to accept a conclusion against which we have strong arguments. It's hardly surprising, then, that our reliance on co-partisan testimony can sometimes produce bad results, and one particularly bad result is when co-partisan testimony leads us to believe misinformation.

False beliefs concerning COVID-19 constitute particularly compelling evidence that misinformed beliefs are very often produced, in large part, by our elevated trust in co-partisan testimony. The pandemic is a vitally important issue relevant to everyone: the virus poses a direct threat to almost everyone's health and, at least indirectly, has had a significant negative impact on almost everyone's well-being. And while there is a wealth of easily accessible accurate information concerning the risks of the virus and methods for mitigating that risk – including, especially, the safety and efficacy of vaccines – large numbers of false beliefs concerning COVID-19 remain both widespread and stubbornly persistent. Very many people believe that COVID-19 is no more dangerous than the flu, that deaths from the virus have been exaggerated by public authorities, that the existing vaccines are not safe and effective, and so on; such beliefs persist despite the fact that these claims can be quickly debunked by consulting easily accessible sources.

There are good reasons for thinking that the majority of individuals who believe misinformation about COVID-19 do so, in large part, because they trust the testimony of co-partisans and distrust the testimony of medical experts. First, from the very beginning of the pandemic, conservative politicians and pundits circulated large quantities of misinformation concerning the virus; such individuals' messages diverged sharply from those of both liberal politicians and pundits, and medical experts. Most consequentially, the highest-profile member of the Republican Party, Donald Trump, regularly propagated misinformation in his various public statements: he downplayed the risk the virus posed, compared it to the flu, referred to it as a “hoax,” mocked mask-wearing, and promoted unproven and often ridiculous potential treatments. In stark contrast, prominent medical authorities and liberal politicians and pundits were largely aligned in their claims that the virus was significantly more dangerous than the flu, that mitigation efforts such as social distancing and mask-wearing were vitally important and, later, that everyone should get vaccinated.Footnote 22 Mainstream news organizations exacerbated the politicization of the pandemic by focusing heavily on the views of politicians and on disagreements between liberals and conservatives. In fact, research shows that in the early stages of the pandemic, newspapers spent significantly more time covering the views of politicians than those of scientists, while television news coverage devoted roughly equal time to politicians and medical experts.Footnote 23 In addition, media outlets trusted by conservatives were especially prone to distribute misinformation about the virus. For instance, popular outlets such as Fox News frequently presented false claims about COVID-19 and often suggested that concerns about the virus were overblown and designed to hurt President Trump.Footnote 24 Right-wing media outlets and conservative politicians and pundits also attempted to prevent experts from effectively refuting their false claims by attacking the credibility of medical experts (such as Anthony Fauci) and government organizations (such as the CDC).Footnote 25

Perhaps unsurprisingly, a stark division arose amongst ordinary Americans: one group trusted conservative elites on the topic of COVID-19 (and distrusted liberal elites and medical experts), and another trusted liberal elites and medical experts (and distrusted conservative elites). A 2020 survey found that only 6% of Democrats said that they trusted Trump either “a great deal” or “a good amount” regarding COVID-19, while 86% of Republicans said the same.Footnote 26 This same study found that 88% of Democrats and 17% of Republicans claimed to trust a Democratic governor on the topic, and it found that while 92% of Democrats professed to have at least a good amount of trust in “medical experts,” only 36% of Republicans said the same. As one might expect, this divergence regarding whom individuals trusted was associated with different beliefs concerning COVID-19 – and conservatives were significantly more likely to believe misinformation about the virus. For instance, Motta et al. (2020) found that individuals who primarily trust and regularly consume right-wing media were significantly less concerned about COVID-19 than consumers of traditional media, and were more than twice as likely to endorse misinformation related to the virus. Relatedly, Merkley and Loewen (2021) found that individuals who distrust medical experts were significantly more likely to exhibit false beliefs about COVID-19. And Calvillo et al. (2020) found that the more politically conservative someone was, and the more they approved of Trump, the less they understood the virus, and the more likely they were to endorse misinformation. A similar study found that COVID-19 skepticism and vaccine hesitancy were associated with the strength of a person's conservatism in an approximately linear fashion, and crucially, that this relationship was not significantly weakened either by an individual's education level or by a history of personal experience with the disease.Footnote 27

Given this evidence, the best explanation for why so many individuals ended up believing misinformation concerning the coronavirus is that they trusted the conservative politicians and pundits making false claims, and distrusted the liberal politicians and medical experts whose public statements might have countered these same falsehoods. After all, misinformed beliefs were concentrated amongst that segment of the population that trusted the conservative politicians and pundits who were the most effective purveyors of such misinformation, and it is not clear how anything other than trust in co-partisan testimony could plausibly explain how this group of people formed and maintained the relevant false beliefs.Footnote 28 There's nothing intrinsic to conservatism that would incline someone to believe that COVID-19 is no more dangerous than the flu, or that public health authorities have exaggerated the numbers of COVID-related deaths. Very many conservatives have relevant background knowledge of science, and most are in the habit of listening to medical experts on a wide variety of subjects. Moreover, conservatives are typically more sensitive to threats and, at least in the United States, have had, on average, more personal experience with the virus's negative impacts. And while it's true that conservatives were exposed to much more misinformation about the virus than non-conservatives, they would not have encountered as much misinformation as they did, and would not have been influenced by this misinformation to the extent that they were, if they did not trust conservative sources. We should conclude, then, that the majority of individuals who formed misinformed beliefs of the sort at issue did so, in significant part, because they trusted the testimony of co-partisans (and distrusted the testimony of anti-partisans). In other words, most of these misinformed beliefs are partisan beliefs.

Neither should we regard the COVID-19 pandemic as unique. For instance, prior to the politicization of the issue in the 1990s, American conservatives and liberals were almost equally likely to accept the scientific consensus regarding climate change; but then the public began to receive very different messages from conservative politicians and pundits on the one hand, and liberal politicians and pundits on the other. The existing evidence suggests that large numbers of American conservatives have come to believe that anthropogenic climate change is not occurring largely due to the divergent messages they have received from political elites – and because they trust conservative elites and distrust liberal elites.Footnote 29 In addition, a similar picture has recently emerged with respect to beliefs about election integrity. Once again, it is primarily conservatives who believe misinformation regarding 2020 election fraud; and once again, the best explanation for why so many individuals believe the relevant falsehoods is that they trust the testimony of prominent conservatives (and distrust the assurances of prominent liberals and relevant experts). For instance, research has found not only that an overwhelming majority of Trump voters believe that Trump lost the election only due to fraud but also that Trump voters who are knowledgeable about politics and who follow the news closely are more likely to believe this falsehood.Footnote 30 In fact, a recent experiment demonstrated that, amongst Trump supporters, exposure to Trump's false claims about election fraud directly results in increased confidence that the election was fraudulent.Footnote 31 Taken together, then, these prominent cases of large numbers of individuals believing consequential misinformation suggest a general conclusion: our tendency to form partisan beliefs explains, in large part, the widespread belief in misinformation.

3. Standard diagnoses

It's natural to assume that believing misinformation on the basis of co-partisan testimony involves a rational failure of some sort. After all, misinformation consists of claims at odds with the existing evidence – evidence that will typically have been distributed widely in popular media, and which is otherwise readily accessible with a little bit of research. So, it's natural to assume that, for example, when Trump claims that COVID-19 is no more dangerous than the flu, no one should take his word for it. One need only spend a few minutes consuming mainstream news, or consulting the CDC website, to learn that medical experts maintain that COVID-19 is much more dangerous than the flu – and that information ought to lead anyone to reject Trump's testimony on the matter. However, in order to substantiate this natural assumption, we need a specific diagnosis of the rational error that misinformed partisan beliefs involve; and in fact, the most common diagnoses of this error are all unsatisfying.

First, according to what is probably the most common account, misinformed partisan beliefs are irrational in virtue of involving directional motivated reasoning. Motivated reasoning, here, refers to an individual responding to information with some aim other than accuracy – rather than attempting to discover the truth, the individual attempts to elevate his in-group, safeguard certain values, preserve a certain cherished belief, or something of the sort.Footnote 32 With respect to the present topic, this account claims that when an individual believes misinformation because she recognizes the source to be a co-partisan, typically she aims at some end other than accuracy.Footnote 33 For example, one might claim that in such cases, an individual believes what she does because she wants to affirm her standing within a certain social group: someone who believes that COVID-19 is no more dangerous than the flu on the grounds that prominent Republicans have made this claim does so because she aims to believe what Republicans believe, so as to maintain her status as a staunch Republican. And any such belief is irrational because, epistemically speaking, one always ought to aim at accuracy when forming or maintaining beliefs.

The principal problem with the claim that misinformed partisan beliefs typically arise from motivated reasoning is that it lacks compelling empirical support. With respect to forming or maintaining beliefs of consequential misinformation, we don't have good evidence that individuals are often guided by some end other than accuracy.Footnote 34 The difficulty with acquiring such evidence is that, for most experiments attempting to determine how individuals' beliefs are impacted by partisan testimony, the results can be interpreted, equally reasonably, either in accordance with an accuracy motive or some other motive. For instance, suppose some committed partisan believes prominent Republicans when they claim that the COVID-19 death rate has been exaggerated. Does she believe this claim because she wants to believe whatever Republicans believe (regardless of whether it's true), or because she believes that whatever Republicans believe is likely to be true? It's difficult to design an experiment that would definitively establish one of these competing explanations. Moreover, even if one were to design an experiment establishing that, under special conditions, individuals aim at some end other than accuracy, such a result still wouldn't establish that, under normal conditions, partisan beliefs typically result from such a motivation.

Consider, for example, a recent study – Bayes et al. (2020) – that was specifically designed to address the “observational equivalence problem” plaguing research in this area. The researchers' strategy was to manipulate the motivation of their subjects – who were exclusively self-identified Republicans – before exposing them to messages about climate change. Specifically, the researchers first attempted to induce a certain motivation in their subjects, and then attempted to identify this motivation's impact on the subjects' beliefs by comparing how individuals responded to messages in the absence of the induced motivation, and by comparing how individuals responded to messages that either matched or failed to match the induced motivation. For instance, some of the experimental subjects received a “group-identity threat prime,” while some did not; then, subjects were presented with one of a variety of messages designed to increase their belief in climate change, one of which directly appealed to their Republican identity. Yet, despite its careful design, there are at least two reasons for thinking that this experiment can't help us determine whether partisan beliefs are typically the result of motivated reasoning. First, the “group-identity threat prime” designed to cause individuals to worry about group cohesion would not only cause individuals to worry about cohesion amongst Republicans, but would also thereby make partisan consensus salient. The researchers' intent is to measure how worrying about partisan cohesion impacts individuals' responses to information; but instead they may be measuring how thinking about partisan consensus impacts individuals' responses to information. In other words, this experiment can't establish that individuals presented with the group-identity threat prime aim to affirm their standing amongst co-partisans (rather than aiming at accuracy) when responding to information: the group-identity threat prime might simply make individuals more likely to be guided by information concerning partisan consensus when attempting to form accurate beliefs about climate change. Second, the message designed to “match” the group-identity motivation was the following: that “the climate is changing, that contrary to many people's impressions a clear majority of Republicans agree with this fact, and also that many Republicans are taking action to combat climate change” (Bayes et al. 2020: 1036–37). But even if it turned out that individuals who have been induced to be motivated to affirm their political identities are influenced especially strongly by this message, it wouldn't follow that, under ordinary conditions, the influence that such messages have on individuals' beliefs is primarily due to a similar motive.Footnote 35 That is, regardless of the results of this experiment, we would still be left with the question: typically, when an individual Republican's beliefs about climate change are impacted by information concerning the beliefs of co-partisans, is she influenced by this information because she aims to affirm her standing as a staunch Republican, or because she aims at accuracy and believes that what most Republicans believe is likely to be true?

Of course, it may well be that at least some individuals who hold misinformed partisan beliefs do so because they aim at some end other than accuracy. And future experiments may reveal that, at least under certain circumstances, individuals sometimes respond to certain specific pieces of information in a way that is explained best by, for instance, the aim of affirming their standing amongst co-partisans. But we ought to assume that individuals typically aim at accuracy when forming beliefs except in instances where we have compelling evidence to the contrary; and at present, we don't have anything approaching compelling evidence that misinformed partisan beliefs are typically the result of motivated reasoning.

Next, an individual who responds to information with the aim of acquiring true beliefs might nonetheless be insufficiently careful with respect to the truth of his beliefs; accordingly, one might suggest that misinformed partisan beliefs are irrational because they result from carelessness. For instance, a highly intuitive suggestion is that misinformed partisan beliefs are typically the result of gullibility. On at least one understanding of the term, being gullible is a matter of believing testimony too readily: to be gullible on a given occasion is to exhibit a kind of blind trust – to accept testimony that a more careful or skeptical individual would not accept.Footnote 36 Just so, one might suggest that when an individual believes prominent conservative pundits who claim that COVID-19 vaccines are dangerous and ineffective, she commits a rational error by gullibly accepting claims that she ought to be skeptical of.

However, this account is problematic because someone who accepts the testimony of recognized co-partisans does not simply believe whatever she's told – she believes the testimony of people she trusts. And neither is such an individual indiscriminate with respect to whom she trusts; rather, she uses information about partisanship to filter testimony in a systematic manner. As Levy (2019, 2022) argues, someone who grants greater credibility to prestigious co-partisans employs a rational strategy: prestigious individuals are likely to know more than we do on a given topic, and co-partisans are more likely to share our values and less likely to try to mislead or manipulate us. Moreover, given how little most ordinary people know about the relevant topics, the content of the claims at issue is not so outlandish that it ought to arouse significant suspicion. For instance, the claim that a rather novel vaccine doesn't work very well and might have dangerous side effects isn't straightforwardly incompatible with most people's background beliefs. And so, when individuals accept such claims from trusted sources, they do not gullibly accept whatever they are told; rather, they accept claims that strike them as plausible from sources that they believe to be trustworthy, just as the most skeptical and least gullible of us do.

Alternatively, one might claim that relying on the testimony of co-partisans constitutes a kind of epistemic laziness.Footnote 37 That is, one might grant that the relevant individuals are not gullible, but nonetheless insist that relying so heavily on the testimony of co-partisans constitutes a kind of intellectual shortcut – one that is unreasonable at least with respect to important subjects of the sort at issue. In fact, certain empirical findings offer at least indirect support for such a view: research has shown that individuals who are more prone to engage in the right sort of reflection – as measured by performance on the Cognitive Reflection Test – and individuals who take the time to deliberate when presented with the relevant information, are better able to distinguish accurate news stories from stereotypical fake news stories.Footnote 38 In this spirit, the present account maintains that misinformed partisan beliefs are typically the result of laziness: these false beliefs could be avoided if individuals took the time to deliberate or reflect on the content of the relevant claims, or perhaps to conduct additional research.

However, this proposal is also unpersuasive. First, the suggestion that misinformed partisan beliefs could be avoided if individuals took the time to reflect or deliberate is problematic for a number of reasons. As we've seen (§2.1), individuals don't rely on co-partisan testimony primarily as an informational shortcut: for instance, providing individuals with information regarding the beliefs of co-partisans often increases processing time; and individuals with more background knowledge are often influenced by partisan cues to a greater extent. In addition, because many people often won't possess beliefs incompatible with the false testimony in these cases – sometimes because they lack any relevant beliefs, and sometimes because the false claims are consistent with existing false beliefs – they can't avoid believing the misinformation simply by taking the time to reflect on its content. For instance, most people don't know much about the science of COVID-19 vaccines, or the details of election security, so they can't filter out many false claims concerning these topics (one of the crucial differences between the sort of misinformation at issue and fake news is that stereotypical fake news is highly implausible by design). Moreover, the suggestion that someone who accepts co-partisan testimony but fails to conduct additional research exhibits epistemic laziness is clearly unreasonable. You don't exhibit any sort of negligence by accepting testimony from trusted sources, even when the subject matter is quite important. For instance, if you trust Anthony Fauci and he claims that COVID-19 is more dangerous than the flu, it's entirely rational for you to simply take his word for it and leave the matter there. In such a case, it's precisely because you trust the source that you are not rationally required to do any further research.Footnote 39

4. An alternative diagnosis

Many who find the foregoing arguments convincing will conclude that misinformed partisan beliefs must involve a rational error of some kind or other, and we just haven't found it yet. I agree that this is the appropriate conclusion; accordingly, in the present section, I will attempt to sketch a more convincing diagnosis of the rational error that misinformed partisan beliefs typically involve. And by way of approaching the issue, I will examine arguments in favor of the alternative verdict – arguments that attempt to show that misinformed partisan beliefs are typically rational or consistent with epistemic virtue. Perhaps, by determining where such arguments fall short, we can identify the rational error we're looking for.

If you are someone who, say, trusts conservatives and distrusts liberals on some set of topics – if you believe that conservatives are reliable and liberals are unreliable – then there's an obvious sense in which it's reasonable for you to accept testimony from conservatives and reject testimony from liberals on those topics. However, the question at hand is whether it's reasonable for someone who, say, identifies as conservative to believe that conservatives are reliable and liberals unreliable on topics of the sort at issue. To defend a positive answer to this question, one might appeal to an argument that Rini (2017) develops. According to Rini, it's reasonable to assign greater credibility to the testimony of co-partisans with respect to certain subjects – namely, normative subjects, and descriptive subjects relevant to political decision-making. With respect to normative subjects, Rini claims that when I learn that someone is a co-partisan, “I learn that she tends to get normative questions right (by my normative lights)”; and so, “she establishes herself as a more reliable normative judge than I would take her to be by default, or especially if she were affiliated to an opposed party” (2017: 51). With respect to politically relevant descriptive subjects, Rini claims that when I learn that someone is a co-partisan, I learn that she is likely to have better judgement regarding the “political importance” of a given piece of information than if she were not a co-partisan (2017: 52). As such, we have good reasons to regard co-partisans as more competent – more likely to possess accurate beliefs – than non-partisans and anti-partisans, with respect to many normative and non-normative topics.

A natural worry is that Rini hasn't made a strong case that co-partisanship is a good guide to competence with respect to politically relevant non-normative or descriptive matters. In particular, as Worsnip (2019: 248) puts the point, “even if I think some source is reliable in making judgments about which descriptive claims are (if true) important, this provides no direct support for thinking that the source is reliable in determining which descriptive claims are true.”Footnote 40 Yet, one can bolster Rini's argument, as Levy (2023: 937) does, by noting that very many seemingly descriptive questions are “normatively inflected,” and that co-partisans tend to have similar beliefs on a wide range of normatively inflected descriptive topics. For instance, it would be reasonable for a conservative to believe that a liberal's political commitments are likely to interfere with his capacity to evaluate the evidence concerning climate change in an unbiased manner – and this belief provides a reason to downgrade one's assessment of the competence of anti-partisans on this descriptive topic.

Levy's point is surely correct; however, it doesn't carry much weight in the context of the sorts of misinformed beliefs at issue. The research surveyed above (§2.2) suggests that widespread misinformed beliefs are driven by the influence of prestigious co-partisans, such as political leaders and television pundits. Conservatives who believe that anthropogenic climate change is not occurring do so because they trust Republican politicians and distrust both Democratic politicians and establishment scientists; conservatives who believe that COVID-19 vaccines are dangerous and ineffective do so because they trust Tucker Carlson and distrust Anthony Fauci and the CDC; and conservatives who believe that election fraud is widespread do so because they believe Donald Trump, and podcast hosts like Steve Bannon, and distrust the relevant experts. Consequently, even if it's reasonable to downgrade one's assessment of the competence of any anti-partisan relative to any co-partisan on descriptive topics, it doesn't follow that it's rational for individuals to accept the specific co-partisan testimony that they do. For instance, even if you are a staunch conservative and regard Anthony Fauci as anti-Republican, you still know that he possesses the kind of training and experience required to evaluate the safety of vaccines; and conversely, even if you believe that Tucker Carlson is in a much better position than you are to evaluate the safety of vaccines (because he possesses superior judgement and has special access to information and advice from experts), you know that he possesses no relevant training and experience. So, even if you believe that Fauci is biased, you ought to assume that, relative to Carlson, he is significantly more likely to accurately determine what the evidence reveals regarding the safety of COVID-19 vaccines. To reasonably judge that Carlson is more competent on this topic than Fauci (and other such scientists), a conservative would need to possess good reasons for believing that Carlson is well-positioned to determine that the consensus amongst mainstream or establishment scientists is mistaken, and that the dissenting voices questioning the safety of these vaccines are correct – to determine that certain self-professed experts who reject the mainstream consensus are in fact the genuine experts on this particular topic. But conservatives don't have good reasons for believing any such thing. In particular, these conservatives possess no evidence that one can accurately determine when mainstream science has gone awry, and when it's better to accept the claims of fringe or “anti-establishment” scientists, without the kind of special training and knowledge that Carlson lacks. Ultimately, then, even if his co-partisan status entails that you should boost your judgement of Carlson's competence (and so give his testimony more weight than you otherwise would), and even if his anti-partisan status entails that you should downgrade your judgement of Fauci's competence (and so give his testimony less weight than you otherwise would), it still isn't reasonable for you to regard Carlson as more competent than Fauci on this specific topic.

Considered in isolation, then, the connection between co-partisanship and competence does not make it reasonable to privilege the testimony of co-partisan politicians and pundits over that of anti-partisan scientists. However, there is another, more promising option for someone aiming to defend the rationality of misinformed partisan beliefs. As we noted above, Levy (2019, 2022: 81–84) claims that it's rational to filter testimony on the basis of competence and benevolence taken together. He claims, further, that because co-partisans, and especially prestigious co-partisans, will typically score highest when competence and benevolence are considered in conjunction, it will typically be rational to privilege the testimony of prestigious co-partisans. (And, conversely, because anti-partisans will typically score much lower when competence and benevolence are considered together, it will typically be rational to give much less weight to the testimony of anti-partisans.) The principal appeal of such a strategy is that it's highly plausible that individuals belonging to the same social groups are likely to share substantive interests; as Levy (2022: 82) says, “those who don't share my values may seek to exploit me, and those on my side are likely to be more trustworthy (toward me).” Accordingly, it's highly plausible that when you have good evidence that certain individuals are both competent with respect to some topic and benevolent toward you, you have good evidence that what they tell you is true – and so it's rational to believe them. If, then, Levy is correct that individuals who believe misinformation of the sort at issue typically do so on the basis of testimony from co-partisans who exhibit the strongest signs of competence and benevolence (considered together), then such beliefs may well be rational.

However, this strategy is ultimately unsuccessful for two reasons. First, while benevolence is crucially important to reliability, it's not clear that it's sufficiently important to outweigh competence disparities of the sort at issue. As we've just noted, widespread misinformed beliefs are driven by the influence of prestigious co-partisans, such as political leaders and television pundits. While such individuals exhibit unmistakable signs of benevolence toward their fellow conservatives, they possess few traits that indicate that they are competent with respect to complex scientific topics such as climate change and COVID-19 – they might appear to be more competent than the average person due to the fact that they have special access to information and advice from experts, but they don't appear to possess the training and experience that scientists possess. Conversely, climate scientists and medical experts may possess traits that indicate a lack of benevolence toward conservatives, but they also possess traits indicating that they are significantly more competent than politicians and pundits with respect to these complex scientific topics. Plausibly, given a choice between competing testimony from a wholly benevolent politician or pundit and a non-benevolent but highly competent scientist, the rational response is to privilege the testimony of the latter.Footnote 41 (That is, it's plausible that the risk that the scientist who is not benevolent toward you will mislead you is not sufficient to outweigh the likelihood that that scientist has accurate beliefs and the politician or pundit does not.)

Second, a speaker's benevolence toward you isn't terribly important when it comes to public statements addressed to co-partisans and anti-partisans alike. The risk that a non-benevolent speaker will intentionally mislead or manipulate you is only significant when that speaker is specifically addressing you (or people like you) and can tailor his message to you – there simply isn't much danger that an anti-partisan speaker will mislead or manipulate you by intentionally communicating false information when that speaker is addressing her co-partisans as well. For instance, suppose you are a staunch conservative and regard Fauci as anti-Republican; accordingly, you suspect that he is liable to try to mislead you, so as to exploit you in some way. Even so, you don't have a good reason to doubt his public statements about the dangers of COVID-19. Your reasons for thinking that Fauci is not benevolent toward Republicans are also reasons for thinking that he is benevolent toward Democrats; so, you ought to assume that he will not attempt to mislead or manipulate Democrats. Now, such a conservative might insist that Fauci (and other mainstream scientists) is the kind of individual who would willingly mislead co-partisans when doing so would help them achieve their goals. However, such a suggestion threatens to undermine the rationality of granting greater credibility to the testimony of conservative politicians and pundits – if liberals can't reasonably assume that prestigious co-partisans will not attempt to manipulate them thanks to the fact that they are co-partisans, then presumably conservatives can't either. The difficulty with this suggestion could only be avoided if conservatives had good reasons for believing that, at least with respect to the topics at issue, liberal scientists are likely to try to manipulate both co-partisans and anti-partisans, whereas conservative politicians and pundits are not; but, while many conservatives may well believe this claim, we should deny that they have good reasons for believing it.Footnote 42 Ultimately, then, a staunch conservative ought to assume that Fauci would not intentionally make false public statements about the dangers of COVID-19, since such statements would mislead Democrats as readily as they would mislead Republicans.

At this point, one might object that, at least from the perspective of the conservatives who believe the relevant misinformation, climate scientists and medical experts don't exhibit signs of competence, while co-partisan politicians and pundits do exhibit such signs – in which case, their response to the testimony they receive from each is at least subjectively rational. However, we should reject this suggestion as well. It may be that individuals who believe the relevant misinformation often judge that Republican politicians and conservative pundits are more competent than (or approximately as competent as) climate scientists and medical experts, but this conclusion isn't supported by the evidence they possess. On the one hand, they know that assessing these complex questions requires specialist knowledge and training, and they know that politicians and pundits don't possess any such knowledge and training (and if they don't explicitly believe these things, they could acquire the relevant beliefs via reflection). And, on the other hand, they possess a wealth of evidence that scientists specializing in a given field are capable of assessing questions related to that field – almost every product a person uses, or medication she takes, provides evidence of, and reveals confidence in, some specific scientific discipline. Moreover, while it's true that misinformed individuals often believe that climate scientists who endorse anthropogenic climate change and medical scientists who endorse COVID-19 vaccines are not competent, such beliefs are typically based on the testimony of co-partisan politicians and pundits – and misinformed individuals do not possess evidence suggesting that co-partisan politicians and pundits are competent to evaluate the abilities of the relevant experts. Accordingly, misinformed individuals who believe that these scientists are not competent are irrational, even by their own lights.

(Alternatively, one might suggest that pseudo-experts and fringe scientists exhibit signs of competence and benevolence, at least from the perspective of the relevant misinformed individuals – and so misinformed partisan beliefs based on the testimony of such sources are reasonable.Footnote 43 There are at least two difficulties with this suggestion. First, there is little evidence that pseudo-experts play a significant role in producing misinformed partisan beliefs. For example, Trump didn't need to summon the support of purported experts to convince his supporters that the dangers of COVID-19 had been exaggerated or that the 2020 election was stolen. Insofar as pseudo-expert testimony plays any role at all, its role is to rationalize what misinformed individuals already believe. Second, when misinformed individuals accept testimony from pseudo-experts and fringe scientists, they typically do so for one of two reasons: they judge that these purported experts are competent because they have been explicitly endorsed by conservative politicians or pundits, or they judge that they are competent because their testimony confirms what prestigious conservatives have been saying on the topic. However, neither of these methods for determining a purported expert's competence on some complex empirical question is subjectively or objectively reasonable.)

We're now in a position to draw a moral from the foregoing discussion. Levy provides the most plausible defense of the reasonableness of misinformed partisan beliefs – namely, that such beliefs result from individuals assessing testimony based on cues for both competence and benevolence. We should agree that filtering testimony on the basis of competence and benevolence is a rational method in general; and we should agree that co-partisanship is a reliable indicator of at least some degree of competence and benevolence; but we should nonetheless insist that, typically, when individuals acquire misinformed beliefs on the basis of co-partisan testimony, the specific application of this method in that context involves a rational error. In general terms, misinformed partisan beliefs typically occur because co-partisanship plays a role in filtering testimony that it shouldn't play. More specifically, these beliefs typically occur because, when responding to testimony, an individual either overweights benevolence and underweights competence, or relies on cues for competence that are not reliable indicators of competence. For example, some individuals may value benevolence so highly that they treat co-partisanship as a necessary condition for accepting testimony, and so reject the testimony of anti-partisans regardless of a given source's competence; some individuals may regard co-partisanship as a much more important indicator of competence with respect to complex scientific questions than it actually is; and so on.
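
To make the structure of this diagnosis concrete, the weighting error can be rendered as a toy calculation. The following sketch is purely illustrative – the sources, cue values, and weights are hypothetical stipulations, not estimates drawn from the paper or the empirical literature – but it shows how, holding the evidence about each source fixed, merely shifting weight from competence to benevolence flips which testimony gets accepted.

```python
# Illustrative sketch only: scoring testimony as a weighted sum of two cues.
# All sources, cue values, and weights below are hypothetical.

def credibility(competence: float, benevolence: float,
                w_comp: float, w_ben: float) -> float:
    """Weighted-sum credibility score for a source's testimony."""
    return w_comp * competence + w_ben * benevolence

# Hypothetical cue values on a complex scientific question.
expert = {"competence": 0.9, "benevolence": 0.3}  # anti-partisan domain expert
pundit = {"competence": 0.2, "benevolence": 0.9}  # co-partisan pundit

weightings = {
    "discerning": (0.8, 0.2),  # competence dominates on technical questions
    "partisan":   (0.2, 0.8),  # co-partisanship (benevolence) overweighted
}

for label, (w_comp, w_ben) in weightings.items():
    e = credibility(expert["competence"], expert["benevolence"], w_comp, w_ben)
    p = credibility(pundit["competence"], pundit["benevolence"], w_comp, w_ben)
    winner = "expert" if e > p else "pundit"
    print(f"{label:>10}: expert={e:.2f}, pundit={p:.2f} -> trusts the {winner}")

# discerning: expert=0.78, pundit=0.34 -> trusts the expert
#   partisan: expert=0.42, pundit=0.76 -> trusts the pundit
```

On the discerning weighting, the anti-partisan expert's testimony wins comfortably; on the inverted weighting, the co-partisan pundit's does. The error, in other words, need not involve misperceiving the cues themselves; it can consist entirely in how the cues are combined.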

Characterized in terms of epistemic virtue and vice, misinformed partisan beliefs typically result when an individual has been undiscerning in responding to testimony. The discerning individual allocates trust appropriately: he trusts (and distrusts) the right sources on the right topics to the right extent. The foregoing discussion, then, suggests that a discerning individual will rely on information concerning co-partisanship when assessing testimony, but only under appropriate circumstances and only to an appropriate extent. For example, suppose that an individual is assessing the competing testimony of two people who are exactly alike with respect to every other sign of competence, but where one is a co-partisan and the other an anti-partisan. The discerning individual will assume that the co-partisan is somewhat more likely to be competent, and significantly more likely to be benevolent toward her; consequently, she will upgrade her assessment of the co-partisan's testimony to some extent, and downgrade her assessment of the anti-partisan's testimony to some extent. Conversely, when an individual accepts Trump's testimony concerning COVID-19's dangerousness and rejects Fauci's competing testimony, she exhibits epistemic vice: for instance, she might fail to attend to the fact that, at least in the present context, the apparent gulf in benevolence that separates Trump and Fauci (due to Trump's co-partisanship and Fauci's anti-partisanship) is dramatically less important than the gulf in competence that separates them. Such a mistake is inconsistent with the epistemic virtue of discernment.
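
The same toy model can represent the discerning response just described. Again, the magnitudes below are hypothetical; the structural point is only that, for the discerning individual, co-partisanship functions as a modest adjustment to an otherwise evidence-driven estimate, not as a veto or a dominant factor.

```python
# Companion sketch (same caveats: all magnitudes hypothetical). Two sources
# are identical on every other sign of competence; only partisanship differs.

def adjusted_score(base: float, co_partisan: bool,
                   comp_bump: float = 0.05, ben_bump: float = 0.15) -> float:
    """Nudge a baseline credibility score up (co-partisan) or down (anti-partisan)."""
    sign = 1 if co_partisan else -1
    return base + sign * (comp_bump + ben_bump)

base = 0.5  # identical evidence of competence for both speakers
print(adjusted_score(base, co_partisan=True))   # 0.7 -> modest upgrade
print(adjusted_score(base, co_partisan=False))  # 0.3 -> modest downgrade
```

Here partisanship shifts the estimate, as it reasonably may, without swamping independent evidence of competence.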

5. Conclusion

Ultimately, then, we should conclude that typical misinformed partisan beliefs exhibit irrationality or epistemic vice, but not in the manner we might have assumed. We shouldn't claim that most misinformed individuals engage in motivated reasoning, since they use information about co-partisanship as a method for identifying accurate testimony. We shouldn't charge them with gullibility, since they use information about co-partisanship to filter testimony in a systematic way. And we shouldn't charge them with laziness, since they combine information about co-partisanship with other available evidence in order to better assess testimony. Yet, even so, misinformed partisan beliefs typically result from a rational mistake. While it's reasonable for information concerning co-partisanship to influence our assessments of whom to trust and whom not to trust, in cases of the sort at issue, such information has an influence that it shouldn't have in the relevant context. That is, misinformed partisan beliefs typically result from a failure of discernment – a failure to allocate trust appropriately when assessing testimony.

Widespread belief in consequential misinformation is not just an individual epistemic failing; it is a significant social problem with a wide variety of social costs. If the present diagnosis of the primary source of misinformed partisan beliefs is correct, then whether anything can be done to mitigate the significant harms of such beliefs depends on whether anything can be done to make individuals more discerning, or to encourage them to exercise discernment more frequently. Given that we are social creatures who derive the vast majority of our knowledge from testimony, discernment is quite plausibly the most important epistemic virtue we possess; even so, the fact that misinformed partisan beliefs are presently so widespread suggests that large numbers of people often fail to exercise this important virtue. As such, whether the right sort of education or training can make individuals more discerning – and, in particular, reduce their tendency to overvalue co-partisan testimony and undervalue anti-partisan testimony – is an especially pressing question.Footnote 44

Footnotes

1 Funk and Hefferon (2019).

2 Rothwell and Desai (2020).

3 Cuthbert and Theodoridis (2022).

4 Accurately determining what individuals believe concerning politically charged topics via surveys can be difficult due to the possibility of “expressive responding”; however, there is evidence that expressive responding does not play a significant role in the specific survey results at issue here. First, researchers have developed various techniques to minimize the influence of expressive responding when conducting surveys, and they find that these techniques do not lessen the number of Republicans who endorse false claims regarding COVID-19 and the 2020 presidential election: see Cuthbert and Theodoridis (2022) and Fahey (2023). Second, answers to survey questions of the sort at issue are connected to relevant behaviors (e.g., Allcott et al. 2020).

5 I am not assuming that there is anything about conservatives that makes them more likely to believe misinformation on the basis of co-partisan testimony. The fact that conservatives are the focus of these recent prominent examples may well be a historical accident.

6 This evidence is reviewed below (§2).

7 I will maintain that typical misinformed partisan beliefs are irrational on both an objective and subjective understanding of rationality; consequently, I won't bother to draw the objective/subjective distinction except when it is particularly relevant.

8 Ahlstrom-Vij (2019) and McCraw (2020) discuss very closely related epistemic virtues.

10 Sperber et al. (2010: §4) and Mercier (2017: 105–08). For discussion, see Levy (2019).

15 Tappin and McKay (n.d.).

16 See also Merkley and Stecula (2018).

17 For further discussion of the psychological mechanisms underlying partisan beliefs, see Levy (2022: chap. 3).

18 Theories of this sort are at least as old as Downs' (1957) “cue theory.”

20 Bakker et al. (2020). See also Tappin et al. (2020: 82).

21 Tappin and McKay (n.d.).

22 Bolsen and Palm (2022: §2).

25 Korecki and Owermohle (2021).

27 Levin and Bradshaw (2022). Pennycook et al. (2022) found that “cognitive sophistication” (which includes knowledge of science) protected both liberals and conservatives from COVID-19 misconceptions to at least some extent; but they also found that this protective effect diminished significantly as the issue became more politically polarized over time.

28 Meyer et al. (forthcoming) found that certain COVID-19 misconceptions were much more strongly associated with certain psychological traits than with political affiliation. However, because this study focused on particularly extreme or implausible misinformation believed by a small percentage of the population, and because the relevant psychological traits are possessed by a similarly small percentage of the population, it doesn't have obvious implications for the widespread misinformed beliefs at issue here.

29 See, for example, Dunlap and McCright (2008), Merkley and Stecula (2018), and Tesler (2018). An alternative theory is that conservatives believe the claims of climate change deniers because climate change denial is more compatible with their values and worldview. For criticism of this theory, see De Cruz (2020: §3.1), Greco (2021: §§2–3), and Levy (2022: 31–35).

30 Pennycook and Rand (2021a).

32 See Kunda (1990). It's worth noting that an individual might have multiple distinct aims that she balances against one another when forming or maintaining beliefs; the present debate assumes that we can abstract away from this complexity and focus on the aim that is predominant relative to a given belief.

33 See, for example, Flynn et al. (2017), Kahan (2017), and De Cruz (2020).

34 For detailed discussion of this point, see Druckman and McGrath (2019) and Tappin et al. (2020).

35 It's also important to note that the researchers found that reading this message did not have a significantly greater impact on individuals' beliefs when comparing individuals who were first exposed to the “group-identity threat prime” and individuals who were not exposed to any motivational prime (Bayes et al. 2020: 1039–40). So, even if one were to grant that the “group-identity threat prime” effectively induced a partisan-identity-based motivation, this research does not provide evidence that individuals are motivated to affirm their partisan identities (rather than aiming at accuracy) when responding to messages concerning the beliefs of co-partisans regarding climate change.

36 Cassam (2019: 122 & 132) and McCraw (2020: 201).

37 Nguyen (2020: 154) charges the members of “epistemic bubbles” with this vice.

38 Pennycook and Rand (2021b).

39 One final proposal might be that misinformed partisan beliefs are typically formed via echo chambers with epistemically problematic features. For a response to this proposal, see Levy (2023).

40 It's worth emphasizing that Rini doesn't maintain that partisanship is a good guide to reliability with respect to all descriptive matters, but only “within specific domains” (2017: 50).

41 Levy appears to reject this claim; for instance, he suggests that “conservatives do not defer to scientists, or to their think-tank intermediaries or more local representatives, because while these sources exhibit cues of competence they fail to pass tests for benevolence” (2019: 322).

42 Such individuals will likely have encountered relevant testimony from conservative politicians and pundits; however, they don't have good reasons for believing that these politicians and pundits are competent with respect to determining who is spreading false information about complex scientific issues (for many of the reasons outlined above: see pp. 14–15).

43 See, for example, Levy (2022: chap. 5).

44 An earlier version of this material was presented to the Washington University in St. Louis Epistemology Group; my thanks to everyone who participated for their questions and suggestions. I am especially grateful to James Druckman, Neil Levy, Regina Rini, and an anonymous reviewer for Episteme, for their extremely helpful comments.

References

Ahlstrom-Vij, K. (2019). ‘The Epistemic Virtue of Deference.’ In Battaly, H. (ed.), The Routledge Handbook of Virtue Epistemology, pp. 209–20. New York: Routledge.
Allcott, H., Boxell, L., Conway, J., Gentzkow, M., Thaler, M. and Yang, D. (2020). ‘Polarization and Public Health: Partisan Differences in Social Distancing during the Coronavirus Pandemic.’ Journal of Public Economics 191, 104254.
Bail, C., Argyle, L., Brown, T., Bumpus, J., Chen, H., Hunzaker, M.B., Lee, J., Mann, M., Merhout, F. and Volfovsky, A. (2018). ‘Exposure to Opposing Views on Social Media Can Increase Political Polarization.’ PNAS 115, 9216–21.
Bakker, B., Lelkes, Y. and Malka, A. (2020). ‘Understanding Partisan Cue Receptivity: Tests of Predictions from the Bounded Rationality and Expressive Utility Perspectives.’ The Journal of Politics 82, 1061–77.
Barber, M. and Pope, J. (2019). ‘Does Party Trump Ideology? Disentangling Party and Ideology in America.’ American Political Science Review 113, 38–54.
Bayes, R., Druckman, J., Goods, A. and Molden, D. (2020). ‘When and How Different Motives Can Drive Motivated Political Reasoning.’ Political Psychology 41, 1031–52.
Bolsen, T., Druckman, J. and Cook, F.L. (2014). ‘The Influence of Partisan Motivated Reasoning on Public Opinion.’ Political Behavior 36, 235–62.
Bolsen, T. and Palm, R. (2022). ‘Politicization and COVID-19 Vaccine Resistance in the U.S.’ Progress in Molecular Biology and Translational Science 188, 81–100.
Calvillo, D., Ross, B., Garcia, R., Smelter, T. and Rutchick, A. (2020). ‘Political Ideology Predicts Perceptions of the Threat of COVID-19 (and Susceptibility to Fake News About It).’ Social Psychological and Personality Science 11, 1119–28.
Cassam, Q. (2019). Vices of the Mind. Oxford: Oxford University Press.
Clayton, K., Davis, N., Nyhan, B., Porter, E., Ryan, T. and Wood, T. (2021). ‘Elite Rhetoric Can Undermine Democratic Norms.’ PNAS 118, e2024125118.
Cuthbert, L. and Theodoridis, A. (2022). ‘Do Republicans Really Believe Trump Won the 2020 Election? Our Research Suggests That They Do.’ The Washington Post. https://www.washingtonpost.com/politics/2022/01/07/republicans-big-lie-trump/.
De Cruz, H. (2020). ‘Believing to Belong: Addressing the Novice-Expert Problem in Polarized Scientific Communication.’ Social Epistemology 34, 440–52.
Downs, A. (1957). ‘An Economic Theory of Political Action in a Democracy.’ Journal of Political Economy 65, 135–50.
Druckman, J. and McGrath, M. (2019). ‘The Evidence for Motivated Reasoning in Climate Change Preference Formation.’ Nature Climate Change 9, 111–19.
Druckman, J., Peterson, E. and Slothuus, R. (2013). ‘How Elite Partisan Polarization Affects Public Opinion Formation.’ American Political Science Review 107, 57–79.
Dunlap, R. and McCright, A. (2008). ‘A Widening Gap: Republican and Democratic Views on Climate Change.’ Environment: Science and Policy for Sustainable Development 50, 26–35.
Fahey, J. (2023). ‘The Big Lie: Expressive Responding and Misperceptions in the United States.’ Journal of Experimental Political Science 10, 267–78.
Flynn, D.J., Nyhan, B. and Reifler, J. (2017). ‘The Nature and Origins of Misperceptions: Understanding False and Unsupported Beliefs About Politics.’ Advances in Political Psychology 38, 127–50.
Funk, C. and Hefferon, M. (2019). ‘U.S. Public Views on Climate and Energy.’ Pew Research Center. https://www.pewresearch.org/science/2019/11/25/u-s-public-views-on-climate-and-energy/.
Golos, A., Hopkins, D., Bhanot, S. and Buttenheim, A. (2022). ‘Partisanship, Messaging, and the COVID-19 Vaccine: Evidence From Survey Experiments.’ American Journal of Health Promotion 36, 602–11.
Greco, D. (2021). ‘Climate Change and Cultural Cognition.’ In Budolfson, M., McPherson, T. and Plunkett, D. (eds), Philosophy and Climate Change, pp. 178–98. Oxford: Oxford University Press.
Harris, P., Koenig, M., Corriveau, K. and Jaswal, V. (2018). ‘Cognitive Foundations of Learning from Testimony.’ Annual Review of Psychology 69, 251–73.
Hart, P.S., Chinn, S. and Soroka, S. (2020). ‘Politicization and Polarization in COVID-19 News Coverage.’ Science Communication 42, 679–97.
Kahan, D. (2017). ‘Misinformation and Identity-Protective Cognition.’ Yale Law & Economics Research Paper No. 587. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3046603.
Korecki, N. and Owermohle, S. (2021). ‘Attacks on Fauci Grow More Intense, Personal and Conspiratorial.’ Politico. https://www.politico.com/news/2021/06/04/fauci-attacks-personal-conspiratorial-491896.
Kunda, Z. (1990). ‘The Case for Motivated Reasoning.’ Psychological Bulletin 108, 480–98.
Levin, J. and Bradshaw, M. (2022). ‘Determinants of COVID-19 Skepticism and SARS-CoV-2 Vaccine Hesitancy: Findings from a National Population Survey of U.S. Adults.’ BMC Public Health 22, 1047.
Levy, N. (2019). ‘Due Deference to Denialism: Explaining Ordinary People's Rejection of Established Scientific Findings.’ Synthese 196, 313–27.
Levy, N. (2022). Bad Beliefs: Why They Happen to Good People. Oxford: Oxford University Press.
Levy, N. (2023). ‘Echoes of Covid Misinformation.’ Philosophical Psychology 36, 931–48.
McCraw, B. (2020). ‘Proper Epistemic Trust as a Responsibilist Virtue.’ In Dormandy, K. (ed.), Trust in Epistemology, pp. 189–217. New York: Routledge.
Mercier, H. (2017). ‘How Gullible Are We? A Review of the Evidence from Psychology and Social Science.’ Review of General Psychology 21, 103–22.
Merkley, E. and Loewen, P. (2021). ‘Anti-Intellectualism and the Mass Public's Response to the COVID-19 Pandemic.’ Nature Human Behaviour 5, 706–15.
Merkley, E. and Stecula, D. (2018). ‘Party Elites or Manufactured Doubt? The Informational Context of Climate Change Polarization.’ Science Communication 40, 258–74.
Meyer, M., Alfano, M. and de Bruin, B. (Forthcoming). ‘Epistemic Vice Predicts Acceptance of Covid-19 Misinformation.’ Episteme.
Motta, M., Stecula, D. and Farhart, C. (2020). ‘How Right-Leaning Media Coverage of COVID-19 Facilitated the Spread of Misinformation in the Early Stages of the Pandemic in the U.S.’ Canadian Journal of Political Science 53, 335–42.
Nguyen, C.T. (2020). ‘Echo Chambers and Epistemic Bubbles.’ Episteme 17, 141–61.
Pennycook, G., McPhetres, J., Bago, B. and Rand, D. (2022). ‘Beliefs About COVID-19 in Canada, the United Kingdom, and the United States: A Novel Test of Political Polarization and Motivated Reasoning.’ Personality and Social Psychology Bulletin 48, 750–65.
Pennycook, G. and Rand, D. (2021a). ‘Examining False Beliefs about Voter Fraud in the Wake of the 2020 Presidential Election.’ Harvard Kennedy School Misinformation Review 2, 1–19.
Pennycook, G. and Rand, D. (2021b). ‘The Psychology of Fake News.’ Trends in Cognitive Sciences 25, 388–402.
Petersen, M., Skov, M., Serritzlew, S. and Ramsøy, T. (2013). ‘Motivated Reasoning and Political Parties: Evidence for Increased Processing in the Face of Party Cues.’ Political Behavior 35, 831–54.
Rini, R. (2017). ‘Fake News and Partisan Epistemology.’ Kennedy Institute of Ethics Journal 27, E-43–E-64.
Rothwell, J. and Desai, S. (2020). ‘How Misinformation Is Distorting COVID Policies and Behaviors.’ Brookings Institution. https://www.brookings.edu/research/how-misinformation-is-distorting-covid-policies-and-behaviors/.
Sperber, D., Clément, F., Heintz, C., Mascaro, O., Mercier, H., Origgi, G. and Wilson, D. (2010). ‘Epistemic Vigilance.’ Mind & Language 25, 359–93.
Suhay, E., Tenenbaum, M. and Bartola, A. (2022). ‘Explanations for Inequality and Partisan Polarization in the U.S., 1980–2020.’ The Forum 20, 5–36.
Tappin, B. and McKay, R. (n.d.). ‘Estimating the Causal Effects of Cognitive Effort and Policy Information on Party Cue Influence.’ PsyArXiv. https://psyarxiv.com/tdk3y/.
Tappin, B., Pennycook, G. and Rand, D. (2020). ‘Thinking Clearly About Causal Inferences of Politically Motivated Reasoning: Why Paradigmatic Study Designs Often Undermine Causal Inference.’ Current Opinion in Behavioral Sciences 34, 81–87.
Tesler, M. (2018). ‘Elite Domination of Public Doubts About Climate Change (Not Evolution).’ Political Communication 35, 306–26.
Worsnip, A. (2019). ‘The Obligation to Diversify One's Sources: Against Epistemic Partisanship in the Consumption of News Media.’ In Fox, C. and Saunders, J. (eds), Media Ethics, Free Speech, and the Requirements of Democracy, pp. 240–64. New York: Routledge.