The Irrationality of Pluralistic Ignorance

[Penultimate draft; please cite the published version.]

Abstract: Pluralistic ignorance is a social-psychological phenomenon in which an agent believes that their attitudes, feelings, and beliefs are different from those of others, despite the fact that their public behavior is identical. I argue that agents in standard cases of pluralistic ignorance are epistemically irrational. I accomplish this, first, by rebutting a recent argument for the rationality of pluralistic ignorance. Next, I offer a defeat-based argument against the epistemic rationality of pluralistic ignorance. Third, I examine a type of case in which the pluralistically-ignorant agent's belief is irrational despite lacking a defeater. Finally, I consider instances of pluralistically-ignorant agents whose beliefs are not irrational, but explain why such cases are not problematic for my main thesis. This critical discussion allows me to offer an important amendment to an extant account of pluralistic ignorance.

Pluralistic ignorance has received much attention in recent years in formal and social epistemology. Roughly, pluralistic ignorance is a social-psychological phenomenon in which an agent believes that their attitudes, feelings, and beliefs are different from those of others, despite the fact that their public behavior is identical.[1] Bjerring et al. (2014) argue that agents in situations of pluralistic ignorance can be and often are epistemically rational. In this paper I argue that agents in standard cases of pluralistic ignorance are epistemically irrational. In order to show this, I first explicate the account of pluralistic ignorance that Bjerring et al. offer. Next, I respond to their argument for the rationality of pluralistically-ignorant agents. Third, after arguing that their account of pluralistic ignorance neglects a crucial feature, I offer a defeat-based argument against the epistemic rationality of pluralistic ignorance that draws on this feature. Next, I examine an objection derived from the work of Miller and McFarland (1987, 1991) that claims that pluralistically-ignorant agents do not hold a defeated belief. Finally, I respond to an objection that claims that pluralistically-ignorant agents are not necessarily irrational.

[1] See Bicchieri (2006, pp. 186-188).

1 Bjerring et al. on Pluralistic Ignorance

To get a better grasp of the phenomenon, it will help to start with some examples. Drawing from examples in the literature on pluralistic ignorance, Bjerring et al. (2014, p. 2448) present the following paradigmatic cases:

Classroom Case
A teacher has just finished presenting some difficult material in class and asks the students whether they have any questions. Although each student does not fully understand the material, no one asks a question. Based on the observation that no student in the class asks a question, each student believes that everyone but him believes that [¬P] the material was not difficult. To avoid being publicly displayed as the only one who did not understand the material, no student dares ask a question.

College Drinking Case
A group of freshmen students have just arrived at their new dorm. At the inauguration party, each student drinks excessively, although each student in fact believes that [P] drinking is not enjoyable. Upon observing the excessive drinking of others, however, each student forms the belief that everyone but him believes that [¬P] drinking is enjoyable. To avoid being publicly displayed as the boring one, every student continues to drink excessively at the party.

Emperor's Case
In Hans Christian Andersen's fable "The Emperor's New Clothes" (1837), we meet two impostors who sell imaginary clothes to an emperor.
They claim that those who cannot see the clothes are either not fit for their office or just truly stupid. In fear of appearing unfit for his office and truly stupid, the emperor, as well as everyone else, pretends to be able to see the garment. Yet everyone believes that [P] the emperor is in fact naked. Based on the observation that everyone acts as if the emperor is dressed, however, each person forms the belief that everyone but him believes that [¬P] the emperor is dressed. To avoid being publicly labelled as someone who is unfit for his office or truly stupid, everyone pretends that the emperor is dressed, except for the little boy who after a while cries out: "but the emperor has nothing on at all!"

With these sorts of examples in mind, we can provide a more formal characterization of pluralistic ignorance. Perhaps the most formal and detailed characterization of the nature of pluralistic ignorance comes from Bjerring et al. (2014). Consequently, their account can serve as a nice starting point for discussing the epistemic rationality of pluralistic ignorance. In light of the above cases, Bjerring et al. take pluralistic ignorance to characterize social situations in which "group[s] of individuals all have the same attitude toward some proposition or norm, all act contrary to this attitude, and all wrongly believe that everyone else in the group has a certain conflicting attitude to the proposition or norm" (Bjerring et al. 2014, p. 2446). More specifically, after examining three other accounts and finding them wanting, they propose (Bjerring et al. 2014, p.
2558):

(PI) "Pluralistic ignorance" refers to a situation where the individual members of a group (i) all believe some proposition P; (ii) all believe that everyone else believes ¬P; (iii) all act as if they believe ¬P; and where (iv) all take the actions of the others as strong evidence for their belief that the latter believe ¬P.[2]

My question is whether the belief in (ii) is epistemically rational.[3] Bjerring et al. make the case that it can be and often is epistemically rational for agents to conform to condition (ii).[4] In the following section I'll consider their argument for this claim.

[2] For the sake of clarity, (PI) is a slightly amended version of their (PI4).
[3] Like Bjerring et al., I'll take "epistemically rational" to be synonymous with "epistemically justified."
[4] They also argue that agents in standard cases of pluralistic ignorance may be pragmatically and all-things-considered rational.

2 Bjerring et al.'s Defense of the Epistemic Rationality of Pluralistic Ignorance

Why might one think that the belief in (ii) is epistemically rational? In attempting to establish the possibility that pluralistically-ignorant agents are epistemically rational, Bjerring et al. (2014, p. 2463) consider the following question: "[W]hich epistemic factors can help explain why an agent ignores, or at least assigns a very low credence to, the possibility...that other agents in the social group, like him, do not reflect their private beliefs in their public behavior?" They take it that agents in situations of pluralistic ignorance have two main items of evidence concerning what others in the group believe. First, agents have observational evidence: they are observing others act as if they believe ¬P (e.g. that drinking is enjoyable). Second, agents know from simple reflection that they themselves are acting contrary to their belief that P (e.g. that drinking is not enjoyable) and that it's possible that other people are doing the same. Bjerring et al.
describe the latter as introspective evidence. So in order to determine whether the agent's belief that others believe that ¬P is epistemically rational, we need to ask "whether the agent's observational evidence outweighs his introspective evidence. If it does, we have an explanation of why the agent...has epistemic reasons to ignore the possibility that the actions of others do not truly reflect what they believe" (Bjerring et al. 2014, p. 2464).

Bjerring et al. hold that there are cases where an agent's observational evidence outweighs her introspective evidence. They say, "In most cases of pluralistic ignorance, it seems, an agent has indeed good epistemic reasons to give more weight to his observational evidence than to his introspective evidence" (Bjerring et al. 2014, p. 2464). They think the College Drinking Case is just such an example (Bjerring et al. 2014, p. 2464):

In standard cases such as the College Drinking Case, agents lack any observational evidence for seriously doubting that the group's behavior does not reflect what each member in the group in fact believes. In the College Drinking Case, there is no striking conflict between the agent's belief that drinking is not enjoyable, his observations of the group's behavior, and his higher-order belief that everyone but him finds drinking enjoyable. Rather, the agent's observations of the excessive drinking in the group makes it epistemically rational for him to maintain the higher-order belief in question. If so, it is sensible to hold that the agent's observational evidence outweighs his introspective evidence in these sorts of cases...[5]

[5] Bjerring et al. allow that examples such as the Emperor's Case will likely make agents who continue to be pluralistically-ignorant epistemically irrational. The reason is that, in addition to their introspective evidence that they act as if they believe ¬P, though they believe P, and that others may well be doing the same, they have observational evidence that the emperor is naked. This, combined with the thought that other people's perceptual faculties are working properly, gives the agent a strong reason to deny that everyone else believes that the emperor is clothed. Such an agent cannot be epistemically rational in continuing to be pluralistically-ignorant.

This argument fails, though, because it does not acknowledge that some evidence is capable of epistemically undercutting other evidence or beliefs. When an item of evidence functions in this way, it acts as an undercutting defeater, a "reason[ ] to question whether your evidence or reasons or grounds for a belief actually indicate that the belief is true" (Bergmann 2006, p. 159). Bjerring et al. appear not to countenance undercutting defeaters. However, we can see why undercutting defeaters should be acknowledged by contrasting two examples, one in which two competing sets of evidence are on a par, and another in which one set of evidence defeats a competing set. For the first case, suppose that Lisa, one of my colleagues, tells me that it's raining out. Another, Aldo, tells me that it's not. In coming to a conclusion about whether it's raining on the basis of this information, epistemic propriety might require that I simply weigh the testimony of Lisa against that of Aldo. But consider another case, one in which I visit a furniture store, seem to see a red table in front of me, and, on the basis of that visual experience, form the belief that there is a red table before me. A minute later I'm told by a sales clerk that there is actually a red light shining on the table that I'm looking at. Unlike the first case, this case involves defeating evidence: the testimony of the sales clerk is an undercutting defeater of my observation-based belief that the table in front of me is red. Consequently, upon receiving this testimony, I can no longer rationally believe that the table is red; the new evidence disqualifies the old, rendering it epistemically impotent, unlike in the previous case. As I'll argue below, many cases of pluralistic ignorance involve agents who have an irrational belief due to an undercutting defeater.

3 A Defeat-Based Argument Against the Rationality of Pluralistic Ignorance

My argument against the epistemic rationality of agents in situations of pluralistic ignorance draws on a feature of these situations not mentioned in (PI). The latter account is importantly incomplete in that it does not recognize that, as in the examples from Bjerring et al. and those in the wider literature on pluralistic ignorance, non-conformity on the part of pluralistically-ignorant agents is potentially costly. That is, agents in situations of pluralistic ignorance act as if they believe ¬P because there is a perceived risk associated with acting on their actual belief that P. This perceived risk varies from one case to another, but it concerns the potential social cost of actions that reflect the agent's frame of mind. The classic cases of pluralistic ignorance discussed in Prentice and Miller (1993), Miller and McFarland (1987, 1991), Kauffman (1988), Matza (1964), Schanck (1932), Katz and Allport (1931), and Andersen (2000) all have this feature.[6] Consider a couple of examples. Bjerring et al.'s College Drinking Case involves students who act as if they like drinking "to avoid being publicly displayed as the boring one" (2014, p. 2448). Kauffman (1988, pp. 246-248) finds that many prison officers occasionally have sympathetic attitudes toward inmates, but disguise these attitudes by adopting a cold and indifferent façade on the job. The reason they adopt this façade is, in part, that they fear rejection from their fellow officers if they show sympathy toward inmates.
[6] Note that while Andersen (2000) doesn't describe his story as a case of pluralistic ignorance, it is taken as a standard example by, e.g., Bicchieri (2006) and Bjerring et al. (2014).

Another reason that (PI) is inadequate concerns the fact that pluralistic ignorance is commonly recognized as an explanation for the existence and persistence of unpopular social norms. Unlike (PI), an account of pluralistic ignorance that acknowledges that agents face potential social costs when deciding whether to conceal their belief that P is able to do this explanatory work. Thus, the following revised version of (PI) is a more accurate account of pluralistic ignorance:

(PIʹ) "Pluralistic ignorance" refers to a situation where the individual members of a group (i) all believe some proposition P; (ii) all believe that everyone else believes ¬P; (iii) all act as if they believe ¬P because of a perceived potential social cost; and where (iv) all take the actions of the others as strong evidence for their belief that the latter believe ¬P.

The following example, which is similar to the Emperor's Case, provides a dramatic illustration of a situation in which the perceived risk of acting on one's beliefs is especially elevated:

Dictator Case
Dictator has a cabinet of 30 advisors. Dictator has selected his advisors for the purpose of providing input in various matters concerning the operations of the state. He is known to treat advisors with whom he disagrees with great cruelty, sentencing some of them to death. Recently, Dictator has aired a policy idea to his advisors. Advisor A believes that [P] Dictator's policy is unsound, but is quick to voice support for it in meetings, much like the rest of A's fellow advisors. Based on their outward behavior and positive statements about Dictator's policy, Advisor A believes that everyone but her thinks that [¬P] the policy is sound.

Is Advisor A's belief that everyone but her agrees with Dictator epistemically rational?
Recall that she formed this belief by observing the pro-policy behavior of her fellow advisors. Given this fact, it seems that the rationality of her belief is undermined by an undercutting defeater. After all, she believes

B: Everyone else believes that [¬P] Dictator's policy is sound.

But she has a reason to think that the ground of this belief (observations of other people's behavior) is not a reliable indicator of truth under the circumstances. That is, she has good reason to believe

D: My fellow advisors would act as if they believe that ¬P whether or not they actually do.

Advisor A's support for D is strong because, she realizes, if the other advisors are anything like her, they wish to avoid the high risks of speaking out. While under normal circumstances Advisor A would rationally take people acting as if ¬P to be evidence that they in fact believe ¬P, she cannot rationally do so here. The reason is that she is well aware of the fact that an advisor who believes that P would have a very good reason to misrepresent what they believe in order to save their neck. Note that this diagnosis is not mere speculation on her part, for all she needs to do is attribute to others the kind of practical reasoning she herself performed. Assuming that Advisor A has no reason to doubt that other advisors are rational and care about their well-being, her belief that B is epistemically irrational.

Many instances of pluralistic ignorance conform to this characterization of the Dictator Case: the risk of acting in accordance with one's belief that P is high enough to warrant the belief that others reasoned in the same way and decided to act as if they believe that ¬P. However, not all instances of pluralistic ignorance are such. Take the Classroom Case. The student believes that

B2: Everyone else believes that [¬P] the material was not difficult.
Unlike in the Dictator Case, it seems the student doesn't have a compelling reason for thinking that the ground of this belief (observations of other people's behavior) is not a reliable indicator of truth under the circumstances. That is, he doesn't have a good reason to believe

D2: My fellow classmates would act as if they believe that ¬P whether or not they actually do.

Again, contrasting this case with the Dictator Case, Advisor A (like most of us) has very good reason to believe that very few people would be willing to risk their lives over a simple public policy disagreement. Thus, her evidence for D is strong. In contrast, the student (like most of us) does not have good reason to believe that his classmates would be unwilling to accept the possible social costs associated with displaying their ignorance. For all the student knows, his classmates are willing to tolerate the possible social costs, such as embarrassment or disapproval for interrupting the lecture, if it means advancing their own learning. Thus, his evidence for D2 is fairly weak. If so, his belief that B2, if it is epistemically irrational at all, is not so for the same reason that Advisor A's belief that B is irrational.

Nonetheless, I think the student, like Advisor A, ought to withhold judgement regarding B2. In other words, both B and B2 are irrational to believe. The reason B is irrational for Advisor A is that she has an undercutting defeater for B. The reason that B2 is irrational for the student, however, is not that he, like Advisor A, has a reason to think that his evidence for B2 is misleading. Rather, the reason is that he cannot rule out the non-remote possibility that his evidence for B2 is misleading. Consider the student's evidence for B2. On the one hand, he observes his classmates acting as if ¬P. On the other, he knows that he himself is acting contrary to what he believes.
This latter, introspective evidence raises the non-remote possibility that the student's evidence for B2 (his observations of others' behavior) is misleading. After all, the student knows that his own behavior is misleading in this scenario. Assuming he has no reason to think he is unique in this regard (we'll examine this assumption further in the next section), it is an open question whether the behavior of his peers regarding P is misleading. If there is a non-remote possibility that his evidence for B2 is misleading, then the student should suspend judgement regarding B2.

Consider a version of the example offered in the previous section. Suppose that, after having formed a belief that the table before me is red (on the basis of my visual experience) and before I encounter the sales clerk, I read the price tag attached to the table, which says at the bottom, "Note that this table may not be colored as it appears. This store occasionally switches to non-ordinary lighting colors throughout the week." Assuming there is no immediate way for me to tell whether the lighting conditions in the store are ordinary or not, I should withhold belief regarding the color of the table. And this is so even though I don't have enough evidence to think that my visual experience as of a red table is misleading, just that it's a non-remote possibility that it's misleading. Things are similar in the Classroom Case. Given that it's a non-remote possibility that the behavior of his peers regarding P is misleading, the student ought to refrain from believing (on the basis of his observations of his peers' behavior) that his peers believe that ¬P. The rational thing for him to do, like Advisor A, is to suspend judgement regarding B2.

These considerations generalize to other standard cases of pluralistic ignorance. In some of these, such as the Dictator Case, the agent has a reason to think that her observational evidence concerning what others believe is misleading.
In these cases, agents have an undercutting defeater for their belief that others believe that ¬P (mentioned in (ii)). Other cases are like the Classroom Case: while agents don't have a reason to think that their observational evidence is misleading, their belief that others believe that ¬P is still irrational because there is a non-remote possibility that their observational evidence is misleading. This non-remote possibility is present because the agent knows that her own behavior regarding P is misleading. Unless she has a reason to think that she is unique in this regard, reflection on her own case raises the non-remote possibility that the behavior of her peers is similarly misleading. But if so, then her belief that everyone else believes that ¬P is irrational. So, pluralistically-ignorant agents are not rational in believing (as they do in (ii)) that everyone but them believes ¬P.

So, I am arguing that the rational doxastic attitude for an agent in a situation of pluralistic ignorance to take is that of withholding. But it might be objected that pluralistic ignorance, intuitively understood, could still obtain if agents adopted that attitude. That is, we could still have a case of pluralistic ignorance on our hands even if conditions (ii) and (iv) of (PIʹ) were not met. For example, the student in the Classroom Case believes the lecture was difficult and, due to the threat of potential social costs, acts as if he thought it wasn't difficult. Would the situation be much different if we simply added that, on reflection, the student did not take the actions of others as strong evidence for what they believe about the lecture and, thus, refrained from believing that his classmates thought it wasn't difficult? After all, the student might still act in a way that conceals what he actually thinks about the lecture because he is unsure if others will share his assessment (even if he doesn't form the belief that they won't).
However, while this is an interesting scenario and may be worth further study, it should not be classified as a case of pluralistic ignorance. The idea that subjects believe that others disagree with them (not just believe that they might disagree) is indispensable to the concept of pluralistic ignorance, as it is generally understood. Halbesleben and Buckley (2004, p. 126), in their examination of the history of the study of the phenomenon, understand pluralistic ignorance as a "social comparison error where an individual holds an opinion, but mistakenly believes that others hold the opposite opinion." Other general characterizations in the literature also mention agents holding a mistaken view or having a misperception about what other agents believe. Interest in studying what is now called pluralistic ignorance grew out of Allport's (1924) work on the illusion of universality of opinions, "the tendency of individuals to believe that opinions are universally held by members of a social group" (Halbesleben and Buckley 2004, p. 128). An early detailed treatment of pluralistic ignorance is that of Katz and Allport (1931), which found that a majority of students in their study believed that racial minorities should be admitted to fraternities, but (mistakenly) believed that others would not agree (Halbesleben and Buckley 2004, p. 128). Schanck (1932) explored the religious and ethical views of residents in a small community with a large Methodist presence and found that residents tended to think, with respect to a number of issues, that the other residents held more conservative views than they did. Discussions of pluralistic ignorance, from the start, have been concerned with a believed self/other divergence in opinion. Cases where agents withhold, by contrast, don't involve any doxastic commitment on the part of the agent concerning whether her views differ from those of others in the group. For this reason, they should be classified differently.
Another reason not to treat cases of withholding as cases of pluralistic ignorance is that the two phenomena likely have distinct consequences. For example, in cases involving alcohol consumption among college students, Prentice and Miller (1993) found that subjects who mistakenly believed their peers to be more comfortable with drinking than themselves 1) felt alienated as a result of thinking their views diverged from the norm (both males and females); and 2) felt pressure to change their views over time to align with what they took their peers to believe about drinking (males). While I know of no literature that has directly studied what might result if a student simply withholds judgment about what their peers believe, it seems that in general these two consequences would be significantly less likely to result. For example, the work of Schroeder and Prentice (1998) points in this direction. They examined the effects on subsequent drinking behavior of educating incoming college freshmen about pluralistic ignorance. The students who participated in the study were divided into two groups, one which was informed via group discussion that they may be overestimating how comfortable their peers are with drinking, another which engaged in non-peer individualistic discussion focused on decision-making in a drinking situation. Schroeder and Prentice found that the first group of students reported drinking significantly less than the second. In theorizing about how, exactly, the first group of students may have been led to drink less, Schroeder and Prentice suggest that in drinking situations the former adopted a skeptical attitude with regard to whether their peers' drinking behavior revealed what the latter actually believed about drinking:

When they saw their peers looking relaxed with, and even amused by, excessive alcohol consumption, they knew enough to discount their perceptions.
They knew that public acquiescence did not necessarily signal private acceptance. Thus, from the outset, these students probably experienced little social pressure to conform to local drinking practices (p. 1273).

Thus, it appears from how pluralistic ignorance has historically been understood in the literature, and from the distinct consequences that result from believing one's peers' views diverge from one's own, that cases of withholding judgement about the attitudes of others in the group should not be treated as cases of pluralistic ignorance.

4 A Defeater-Defeater?

One might object to my analysis by claiming that agents in situations of pluralistic ignorance have reason to think that they are not like their peers in relevant respects. So, the fact that they are misrepresenting themselves gives them no reason at all to think that others might be doing the same. Put differently, my objector might concede that we typically have no reason to doubt that we are like other members of our peer group in relevant respects, but insist that subjects in states of pluralistic ignorance do have a reason to doubt that they are like everyone else in relevant respects. That is, my objector might claim, they are typically in possession of a defeater-defeater, a reason that removes (defeats) the rationality-defeating power of the original defeater.

To illustrate, let's return to the furniture store example. Suppose that just as the sales clerk is finished telling me that a red light is shining on the table I'm looking at, a group of her coworkers walks over to the conversation. While they are all chuckling, one of them speaks up: "She's pulling your leg. She's been saying that to all of the customers who've been looking at that table." This new testimonial evidence serves to defeat the rationality-defeating power of the original sales clerk's testimony.
The result is that my belief that the table before me is red, formed on the basis of its appearing to be red, is now as rational as it was before I heard the sales clerk's testimony. One might propose that such a situation obtains in standard cases of pluralistic ignorance. If so, then pluralistically-ignorant subjects' beliefs about what others believe can be rational after all.

What might this defeater-defeater be? One might claim that the agent has a reason to think she is unique in relation to her peers. Miller and McFarland (1987, 1991) provide a fairly detailed, experimentally supported account of the cognition of agents in situations of pluralistic ignorance. One of their conclusions is that:

...people believe that they possess a greater degree of traits that lead to social inhibition than does the average other. We proposed that it is people's belief that they are generally more bashful, hesitant, self-conscious, and so on than the average other that leads them to infer the situationally specific differences between self and others that constitute pluralistic ignorance (Miller and McFarland 1987, p. 300).

Relatedly, Miller and McFarland (1991, p. 298) say that people generally think they are more fearful of embarrassment than the average other. For the sake of specificity, let's take the subject's uniqueness belief to be the following:

FE: Fear of embarrassment is a more potent determiner of my behavior than of the behavior of others.

On this objection, and contrary to my proposed analysis, pluralistically-ignorant agents see themselves as unique in this way and so don't take the fact that they are misrepresenting themselves to make it any more likely that others might be doing the same. On this view, they would properly take the fact that other people act as if they believe ¬P to be strong evidence that they do in fact believe ¬P.
Thus, FE serves to reinstate the full evidential force of the observed behavior of others by casting doubt on propositions like D and on the idea that it is a non-remote possibility that others' behavior is misleading. But if so, then pluralistically-ignorant agents can be epistemically rational in believing propositions like B and B2 in light of their observational evidence. As Miller and McFarland (1991, p. 298) say, "If people believe that they possess a greater degree of a particular trait than does the average other, it seems reasonable for them also to expect that their behavior in situations that engage that trait would be different from that of the average other." Given that subjects believe FE, it is reasonable that they would take the behavior of others at ordinary face value and believe that others believe ¬P.

This objection is unsatisfactory, however, for we can ask about the rationality of FE. It seems that Miller and McFarland are inclined to say that, given that a subject holds FE, it's reasonable for him or her to take the observed behavior of others (acting as if ¬P) at ordinary face value and believe something like

EB: Everyone else but me believes ¬P.

The agent would reason that if others believed P, then that belief would be reflected in their behavior, given that nothing like FE applies to them. It's epistemically reasonable to infer EB from FE and the observed behavior of others. However, our question is whether FE is believed rationally in the first place. If it is not, then subjects are epistemically irrational in believing EB.[7] Miller and McFarland seem to think that FE can be rationally believed. They remark that "people have access to more cues pertaining to the presence of internal traits in self than in others." Or, put slightly differently, "...individuals have more data relevant to the existence of internal traits in self than in others" (Miller and McFarland 1987, p. 301).
In an important sense, then, agents' belief in FE is based on the evidence they have. But Miller and McFarland's remarks do not show that the belief that FE is rational. While the agent's evidence about their internal traits makes rational their belief that their own behavior is in many cases influenced by fear, it does not rationally allow them to form any views regarding the extent to which fear influences the behavior of others. It certainly does not allow them to rule out the possibility that others' behavior is also in many cases influenced by fear. At most, this internal, introspective evidence supports something like:

FE*: Fear of embarrassment is a potent determiner of my own behavior.

Thus, the fact that agents' belief in EB is (typically) based in part on an irrational belief in FE implies that their belief in EB is not rational.

While the work of Miller and McFarland might be good as a descriptive account of the psychology of pluralistically-ignorant agents, it is not adequate as a normative account. Their work helps us to see why agents in situations of pluralistic ignorance form the beliefs they do about what others believe. But it does not vindicate the epistemic rationality of pluralistically-ignorant agents. Rather, their work helps us to locate the source of epistemic irrationality in such agents. Instead of showing that agents in situations of pluralistic ignorance have a defeater for their defeater of the belief that everyone else believes ¬P, Miller and McFarland's work lends credence to the idea that typical agents are epistemically irrational. This is so whether the focus is on agents' belief in FE itself or on their inference to EB on the basis of (in part) FE.

7 It's a fairly uncontroversial constraint on inferentially-justified belief that if one's belief that P is to be justified on the basis of an inference from Q, then Q needs to be justified. Both "inferential internalists" and "inferential externalists" agree on this much.
One might object that subjects who form the belief that FE, whether or not as a result of cognitive biases, are more likely to act in a rational way. If they fear the potential consequences of letting others know they believe P, and if they use FE to infer EB, it seems to make sense for them to refrain from acting on their actual belief that P. In the College Drinking Case, students who believe FE will infer EB and, due to their fear of being considered boring by their peers, rationally refrain from displaying their view that drinking is not enjoyable. In other words, believing FE ultimately leads to prudentially rational action.8 Further, one might argue that it's appropriate for subjects in situations of pluralistic ignorance to believe that FE because doing so confers a benefit on the group by increasing social integration. Social integration might occur because agents who believe FE (and use it to infer EB) will think their views sharply diverge from those of their peers and, thus, will be less likely to spread (what they think is) their deviant view to other members of the group.9 In the College Drinking Case, students who believe FE will infer EB and, thus, think that nobody else shares their view that drinking is not enjoyable. Because of their fear of being considered boring, they will not spread what they take to be a deviant view to other members of the group. As a result, the current uniform drinking norm of the group will remain intact.

While I do not deny that there may be cases in which agents' forming FE would lead to prudentially rational action or that agents' forming FE could lead to increased social integration for the group, the focus of this paper has been epistemic rationality, rather than prudential rationality.

8 For an argument that agents in situations of pluralistic ignorance can be prudentially rational to conceal their belief that P, see Bjerring et al. (2014, p. 2463).
By "epistemic rationality" I have in mind the sort of rationality that one's belief has when it is supported by (and formed on the basis of) one's evidence. The fact that the belief that FE may lead to prudentially rational action does not entail that that belief is epistemically rational. There are many situations in which it's in an agent's self-interest to believe something that is not supported by her evidence. Likewise, the fact that a widely shared belief increases social integration does not imply that the belief is supported by evidence. Further, even if the social integration resulting from agents' believing FE is so beneficial as to be evolutionarily advantageous, it still does not mean that FE is supported by agents' evidence. Some beliefs that lead to evolutionary advantages might well be adopted for reasons entirely unrelated to their truth-value or epistemic support.10 But then it's difficult to maintain the idea that agents' belief that FE must be epistemically rational if it benefits the group.

9 For a similar idea, see Noelle-Neumann (1974, p. 43).

5 Is Pluralistic Ignorance Always Epistemically Irrational?

One might object that I haven't shown pluralistic ignorance to be an epistemically irrational phenomenon because I haven't shown that all pluralistically-ignorant agents are epistemically irrational. Rather, what I've shown is merely that standard instances of pluralistic ignorance involve epistemically irrational agents. By a "standard instance of pluralistic ignorance," I mean a case that conforms to (PIʹ) and in which the agent's epistemic position regarding other participants' beliefs is comparable to that of the agents involved in cases commonly discussed in the literature on pluralistic ignorance. Now, not all cases of pluralistic ignorance are such.
Consider a case just like the Classroom Case, except that the instructor starts the lecture by stating that the upcoming lecture topic should be easy to understand and that, in the last several years, every previous student who has heard it has had no trouble grasping it. The instructor's statement is false, but seems sincere and accurate. In this case (call it the Deceitful Instructor Case), it appears that the student rationally believes that the other students believe the material was not hard. Or consider a case in which pluralistically-ignorant agents have evidence for FE. Suppose each individual's therapist has informed them that they are more fearful than the average person. In this case (call it the Informative Therapist Case), it would be rational for these agents to take the behavior of others at face value and believe that they believe ¬P; their own misleading behavior wouldn't give them a reason to be suspicious about whether others were doing the same.

While it seems reasonable to allow both that the above cases are genuine cases of pluralistic ignorance and that agents in such cases may be rational in believing that their peers believe ¬P, such cases differ from what I call standard cases. The cases in the wider literature on pluralistic ignorance exclude a significant feature that is present in the current examples. That is, standard cases of pluralistic ignorance are such that the only evidence the subject uses to form beliefs about what others in the group believe is what the subject can gather from introspection and the observation of others' behavior, perhaps along with some general folk psychological assumptions (e.g. that people's behavior regarding P generally reflects their attitude regarding P). But in the Deceitful Instructor Case and the Informative Therapist Case there is another source of evidence: the testimony of the instructor and therapist, respectively.

10 See, e.g., Churchland (1987, pp. 548-549).
Thus, such cases should not be thought of as standard ones; they are not in the spirit of the cases of pluralistic ignorance discussed in the literature. Now, I am not in a position to specify in a nontrivial way what demarcates standard cases from non-standard ones, mainly because it does not seem possible to formulate a precise criterion for what counts as admissible background knowledge in standard cases of pluralistic ignorance. General beliefs about the kinds of things people fear are allowed, but specific beliefs about how fearful one is relative to others are not. Beliefs about what the teacher has said are acceptable, except when these beliefs concern a misleading statement the teacher made about the difficulty of the material. The task of specifying what makes a piece of evidence admissible in a standard case of pluralistic ignorance seems hopeless. However, standard cases do appear to differ importantly from non-standard ones. In the Deceitful Instructor Case and the Informative Therapist Case, the agent's belief that others believe ¬P is clearly rational, and the rationality of this belief has a straightforward explanation: the agent possesses specific evidence warranting her belief. For this reason, the two non-standard cases do not raise the kind of puzzle that standard cases generate.

Let me elaborate on this point a little further. Pluralistic ignorance is commonly treated among those who study it as an undesirable state of affairs, one that (other things equal) ought to be dissolved.11 The reason it is treated as such is that it often has bad consequences. For example, standard cases often perpetuate unpopular social norms. The College Drinking Case illustrates this. While most students prefer not to drink, most end up doing so due to the potential social costs of refraining.
And this drinking behavior further supports the impression that most students prefer drinking, which continues to impel students to act contrary to their preferences and engage in behavior that may be harmful. By better understanding these cases, we might hope to learn more about how to keep them from arising and/or how to dissolve them, thus preventing these bad consequences. My arguments against the epistemic rationality of pluralistic ignorance, if successful, contribute in a modest way to our understanding of the phenomenon by pointing out that pluralistically-ignorant agents epistemically err in a particular way. Given that they increase our understanding in this way, perhaps they further suggest that efforts to prevent or dissolve situations of pluralistic ignorance should address the cognitive biases of those involved. At any rate, since my discussion applies to the vast majority of actual cases, very little seems to be lost when it comes to solving the real-world problems associated with pluralistic ignorance; a discussion that covered all cases (both actual and possible) wouldn't amount to a significant improvement in this regard.

11 See, e.g., Prentice and Miller (1993, p. 254) and Bicchieri (2006, pp. 193-196).

6 Conclusion

In the course of arguing that agents in standard cases of pluralistic ignorance are epistemically irrational, I argued that (PI) neglects an important feature of situations of pluralistic ignorance. That is, it ignores the fact that agents in such situations believe that there is a potential social cost to acting on their belief that P. I thus proposed (PIʹ), which in turn served as the basis of my defeat-based argument against the epistemic rationality of pluralistically-ignorant agents. Miller and McFarland's work, rather than casting doubt on my contention, helped us to locate the source of irrationality. My view is not that standard cases of pluralistic ignorance are irrational by definition.
Rather, I define "standard cases" ostensively, by pointing to the extant literature on pluralistic ignorance, which happens to include cases that involve epistemically irrational agents.

References

Allport, F. H. 1924. Social Psychology. Boston: Houghton-Mifflin.
Andersen, H. C. 2000. The Emperor's New Suit (1837). Zurich: North-South Books.
Bergmann, M. 2006. Justification without Awareness: A Defense of Epistemic Externalism. New York: Oxford University Press.
Bicchieri, C. 2006. The Grammar of Society: The Nature and Dynamics of Social Norms. Cambridge: Cambridge University Press.
Bjerring, J. C., Hansen, J. U., and Pedersen, N. J. L. L. 2014. 'On the Rationality of Pluralistic Ignorance.' Synthese, 191 (11): 2445-2470.
Churchland, P. S. 1987. 'Epistemology in the Age of Neuroscience.' Journal of Philosophy, 84 (10): 544-553.
Halbesleben, J. R. B. and Buckley, M. R. 2004. 'Pluralistic Ignorance: Historical Development and Organizational Applications.' Management Decision, 42: 126-138.
Katz, D. and Allport, F. H. 1931. Student Attitudes. Syracuse, NY: The Craftsman Press.
Kauffman, K. 1988. Prison Officers and Their World. Cambridge, MA: Harvard University Press.
Matza, D. 1964. Delinquency and Drift. New York: Wiley.
Miller, D. and McFarland, C. 1987. 'Pluralistic Ignorance: When Similarity is Interpreted as Dissimilarity.' Journal of Personality and Social Psychology, 53 (2): 298-305.
-- 1991. 'When Social Comparison Goes Awry: The Case of Pluralistic Ignorance.' In J. Suls and T. Wills (eds), Social Comparison: Contemporary Theory and Research, pp. 287-313. Hillsdale, NJ: Erlbaum.
Noelle-Neumann, E. 1974. 'The Spiral of Silence: A Theory of Public Opinion.' Journal of Communication, 24 (2): 43-51.
Prentice, D. and Miller, D. 1993. 'Pluralistic Ignorance and Alcohol Use on Campus: Some Consequences of Misperceiving the Social Norm.' Journal of Personality and Social Psychology, 64 (2): 243-256.
Schanck, R. L. 1932. 'A Study of a Community and Its Groups and Institutions Conceived of as Behaviors of Individuals.' Psychological Monographs, 43 (2): i-133.