On appeals to intuition: a reply to Muñoz-Suárez

In "Intuition Mongering" (2012, The Reasoner 6 (11): 169-170), I argue by analogy with Appeals to Authority (AA) that the following is a necessary, but not a sufficient, condition for the strength of Appeals to Intuition (AI) made by professional philosophers:

(PAI) When philosophers appeal to intuitions, there must be an agreement among the relevant philosophers concerning the intuition in question; otherwise, the appeal to intuition is weak.

That is, an AI that fails to meet PAI is weak, in the sense that 'it seems to S that p' doesn't make p more likely to be true or probable, where S is a philosopher and other philosophers don't share S's seeming that p.

Muñoz-Suárez (2014, "Should We Entitle Strong Appeals to Intuition?," The Reasoner 8 (7): 77-78) objects to the analogy between AA and AI by pointing out an alleged dissimilarity: the epistemic subject is relevant to the strength of AAs (e.g., she must be a genuine expert on the subject matter) but not to the strength of AIs; as far as AIs are concerned, it doesn't matter who the intuiter is. Muñoz-Suárez (2014: 78) "suspect[s] that the strength of an AI relies on the intuitiveness of the proposition involved in its antecedent - i.e., 'p' in 'it seems that p' - rather than on some social fact like agreement either among experts or among laymen." Accordingly, my argument is about the AIs of philosophers in particular, whereas Muñoz-Suárez's objection is supposed to apply to AIs in general. Be that as it may, the question is whether Muñoz-Suárez is right about the following:

(1) Unlike AAs, facts about the subject are irrelevant to the evaluation of the strength of AIs.

(2) Unlike AAs, facts about the community (e.g., agreement/disagreement) are irrelevant to the evaluation of the strength of AIs.

Of course, if AA and AI shared all properties in common, they wouldn't be analogous; they would be identical.
So there must be at least one property that they don't share (that is the nature of analogies). But I don't think that AA and AI are dissimilar in the ways that Muñoz-Suárez alleges.

Regarding (1): whether they are beliefs, inclinations, or experiences, seemings (as in 'it seems to S that p') and appearances (as in 'it appears to S that p') are mental states (Tucker, 2013, "Introduction," Seemings and Justification, ed. C. Tucker, NY: OUP, 3). A mental state must be the mental state of some subject, which is why the subject is relevant to the strength of AIs. That the subject is a focus of epistemic evaluation is an insight at the core of virtue epistemology (e.g., Greco, 2003, "Knowledge as Credit for True Belief," Intellectual Virtue, eds. M. DePaul and L. Zagzebski, NY: OUP, 111-134) and virtue argumentation theory (e.g., Aberdein, 2014, "In Defence of Virtue," Informal Logic 34 (1): 77-93). Indeed, some defenders of AI in philosophy have argued that some intuiters are better than others. In particular, critics of experimental philosophy have argued that philosophers are expert intuiters; on their view, it does matter who the intuiter is, since professional philosophers ("experts") are better intuiters than non-philosophers ("novices").

Regarding (2): the evidential role of intuitions is often defended by analogy with perception. Just as one is prima facie justified in believing that p when it seems visually to one that p, it is argued, one is prima facie justified in believing that p when it seems intellectually to one that p (Tucker 2013: 1-2). If this is correct, however, then there would be cases of misintuition, just as there are cases of misperception. How can one tell whether one is having a veridical perception or a misperception? One way to tell is to ask other people. If I'm the only one who sees a pink elephant, it's likely that I'm hallucinating.
Similarly, if I'm the only one to whom it seems that p, it's likely that I'm misintuiting. That the community is a focus of epistemic evaluation is again an insight that is at the core of virtue epistemology and argumentation theory. In argumentation theory, agreement is taken to be epistemically significant for the evaluation of AAs. For example, according to Walton (2014, "On a Razor's Edge," Argument & Computation 5 (2-3): 139-159), one of the "critical questions" that need to be addressed when evaluating the strength of AAs is the "consistency question," i.e., whether what expert E asserts is consistent with what other experts assert. In the epistemology of disagreement, disagreement is taken to be epistemically significant insofar as it demands some belief-revision (or at least reasons why belief-revision is not warranted) on the part of an epistemic subject (Christensen and Lackey, 2013, "Introduction," The Epistemology of Disagreement, eds. D. Christensen and J. Lackey, NY: OUP, 1-3). Accordingly, social facts, such as whether others agree or not, are relevant to the evaluation of the strength of AIs.

Some might think that Muñoz-Suárez doesn't have to defend (1) and (2) in order to undermine my argument for PAI. Instead, he merely has to show that the intuitiveness of p is more relevant to the strength of AIs than facts about the intuiter or the community. Even if Muñoz-Suárez did manage to show that, it wouldn't be enough to undermine my argument. For to say that X is more relevant than Y is to admit that Y is relevant, only less so. Hence, to say that facts about the intuiter or the community are less relevant than the intuitiveness (or insightfulness) of p is to say that such facts are relevant, only less so. But the claim that PAI is a necessary, but not a sufficient, condition for strong AIs is consistent with there being other relevant conditions that AIs must meet to be strong arguments.
For his counterargument to succeed, then, Muñoz-Suárez must show that AIs aren't required to meet PAI to be strong arguments (cf. his (2*) on p. 77). Since PAI is about agreement among philosophers, Muñoz-Suárez must show that agreement is irrelevant to the strength of AIs.

To sum up: if seemings are mental states, the subject is a proper focus of epistemic evaluation. If subjects can misintuit, in much the same way that subjects can misperceive, then the community is a proper focus of epistemic evaluation. If this is correct, then (1) and (2) are false; AA and AI aren't dissimilar in the ways that Muñoz-Suárez alleges, and thus my argument by analogy with AAs for PAI as a necessary condition for the strength of AIs still stands.