This book offers solutions to two persistent and, I believe, closely related problems in epistemology. The first problem is that of drawing a principled distinction between perception and inference: what is the difference between seeing that something is the case and merely believing it on the basis of what we do see? The second problem is that of specifying which beliefs are epistemologically basic (i.e., directly, or noninferentially, justified) and which are not. I argue that what makes a belief a perceptual belief, or a basic belief, is not any introspectible feature of the belief but rather the nature of the cognitive system, or "module", that is causally responsible for the belief. Thus, even zombies, who in the philosophical literature lack conscious experiences altogether, can have basic, justified, perceptual beliefs. The theories of perceptual and basic beliefs developed in the monograph are embedded in a larger reliabilist epistemology. I use this theory of basic beliefs to develop a detailed reliabilist theory, Inferentialist Reliabilism, which offers a reliabilist account of inferential justification and which solves some longstanding problems for other reliabilist views by demanding inferential support for some—but not all—beliefs. The book is an instance of a thoroughly naturalistic approach to epistemology. Many of my arguments have an empirical basis, and the view that I endorse reserves a central role for the cognitive sciences—in particular, cognitive neuroscience—in filling in the details of an applied epistemological theory.
Is perception cognitively penetrable, and what are the epistemological consequences if it is? I address the latter of these two questions, partly by reference to recent work by Athanassios Raftopoulos and Susanna Siegel. Against the usual circularity readings of cognitive penetrability, I argue that cognitive penetration can be epistemically virtuous, when---and only when---it increases the reliability of perception.
The paper offers a solution to the generality problem for a reliabilist epistemology, by developing an “algorithm and parameters” scheme for type-individuating cognitive processes. Algorithms are detailed procedures for mapping inputs to outputs. Parameters are psychological variables that systematically affect processing. The relevant process type for a given token is given by the complete algorithmic characterization of the token, along with the values of all the causally relevant parameters. The typing that results is far removed from the typings of folk psychology, and from much of the epistemology literature. But it is principled and empirically grounded, and shows good prospects for yielding the desired epistemological verdicts. The paper articulates and elaborates the theory, drawing out some of its consequences. Toward the end, the fleshed-out theory is applied to two important case studies: hallucination and cognitive penetration of perception.
The New Evil Demon Problem is supposed to show that straightforward versions of reliabilism are false: reliability is not necessary for justification after all. I argue that it does no such thing. The reliabilist can count a number of beliefs as justified even in demon worlds, others as unjustified but having positive epistemic status nonetheless. The remaining beliefs---primarily perceptual beliefs---are not, on further reflection, intuitively justified after all. The reliabilist is right to count these beliefs as unjustified in demon worlds, and it is a challenge for the internalist to be able to do so as well.
Cognitive penetration of perception is the idea that what we see is influenced by such states as beliefs, expectations, and so on. A perceptual belief that results from cognitive penetration may be less justified than a nonpenetrated one. Inferentialism is a kind of internalist view that tries to account for this by claiming that some experiences are epistemically evaluable, on the basis of why the perceiver has that experience, and the familiar canons of good inference provide the appropriate standards by which experiences are evaluated. I examine recent defenses of inferentialism by Susanna Siegel, Peter Markie, and Matthew McGrath and argue that the prospects for inferentialism are dim.
Can beliefs that are not consciously formulated serve as part of an agent's evidence for other beliefs? A common view says no, any belief that is psychologically immediate is also epistemically immediate. I argue that some unconscious beliefs can serve as evidence, but other unconscious beliefs cannot. Person-level beliefs can serve as evidence, but subpersonal beliefs cannot. I try to clarify the nature of the personal/subpersonal distinction and to show how my proposal illuminates various epistemological problems and provides a principled framework for solving other problems.
The “looks” of things are frequently invoked (a) to account for the epistemic status of perceptual beliefs and (b) to distinguish perceptual from inferential beliefs. ‘Looks’ for these purposes is normally understood in terms of a perceptual experience and its phenomenal character. Here I argue that there is also a nonexperiential sense of ‘looks’—one that relates to cognitive architecture, rather than phenomenology—and that this nonexperiential sense can do the work of (a) and (b).
Outside of philosophy, ‘intuition’ means something like ‘knowing without knowing how you know’. Intuition in this broad sense is an important epistemological category. I distinguish intuition from perception and perception from perceptual experience, in order to discuss the distinctive psychological and epistemological status of evaluative property attributions. Although it is doubtful that we perceptually experience many evaluative properties and also somewhat unlikely that we perceive many evaluative properties, it is highly plausible that we intuit many instances of evaluative properties as such. The resulting epistemological status of evaluative property attributions is very much like it would be if we literally perceived such properties.
Much of the intuitive appeal of evidentialism results from conflating two importantly different conceptions of evidence. This is most clear in the case of perceptual justification, where experience is able to provide evidence in one sense of the term, although not in the sense that the evidentialist requires. I argue this, in part, by relying on a reading of the Sellarsian dilemma that differs from the version standardly encountered in contemporary epistemology, one that is aimed initially at the epistemology of introspection but which generalizes to theories of perceptual justification as well.
Epistemic defeat is standardly understood in either evidentialist or responsibilist terms. The seminal treatment of defeat is an evidentialist one, due to John Pollock, who famously distinguishes between undercutting and rebutting defeaters. More recently, an orthogonal distinction due to Jennifer Lackey has become widely endorsed, between so-called doxastic (or psychological) and normative defeaters. We think that neither doxastic nor normative defeaters, as Lackey understands them, exist. Both of Lackey’s categories of defeat derive from implausible assumptions about epistemic responsibility. Although Pollock’s evidentialist view is superior, the evidentialism per se can be purged from it, leaving a general structure of defeat that can be incorporated in a reliabilist theory that is neither evidentialist nor responsibilist in any way.
An influential argument for anti-reductionism about testimony, due to C. A. J. Coady, fails, because it assumes that an inductive global defense of testimony would proceed along effectively behaviorist lines. If we take seriously our wealth of non-testimonially justified folk psychological beliefs, the prospects for inductivism and reductionism look much better.
Raftopoulos’s most recent book argues, among other things, for the cognitive impenetrability of early vision. Before we can assess any such claims, we need to know what’s meant by “early vision” and by “cognitive penetration”. In this contribution to the book symposium, I explore several different things that one might mean – indeed, that Raftopoulos might mean – by these terms. I argue that whatever criterion we choose for delineating early vision, we need a single criterion, not a mishmash of distinct criteria. And I argue against defining cognitive penetration in partly epistemological terms, although it is fine to offer epistemological considerations in defending some definitions as capturing something of independent interest. Finally, I raise some questions about how we are to understand the “directness” of certain putative cognitive influences on perception and about whether there’s a decent rationale for restricting directness in the way that Raftopoulos apparently does.
To what extent are cognitive capacities, especially perceptual capacities, informationally encapsulated and to what extent are they cognitively penetrable? And why does this matter? Two reasons we care about encapsulation/penetrability are: (a) encapsulation is sometimes held to be definitional of modularity, and (b) penetrability has epistemological implications independent of modularity. I argue that modularity does not require encapsulation; that modularity may have epistemological implications independently of encapsulation; and that the epistemological implications of the cognitive penetrability of perception are messier than is sometimes thought.
I defend a moderate (neither extremely conservative nor extremely liberal) view about the contents of perception. I develop an account of perceptual kinds as perceptual similarity classes, which are convex regions in similarity space. Different perceivers will enjoy different perceptual kinds. I argue that for any property P, a perceptual state of O can represent something as P only if P is coextensive with some perceptual kind for O. 'Dog' and 'chair' will be perceptual kinds for most normal people, 'blackpoll warbler' for the expert birdwatcher but not for the rest of us, 'dangerous', 'familiar', or 'meaning that the cat is on the mat' for none of us.
The Sellarsian dilemma is a famous argument that attempts to show that nondoxastic experiential states cannot confer justification on basic beliefs. The usual conclusion of the Sellarsian dilemma is a coherentist epistemology, and the usual response to the dilemma is to find it quite unconvincing. By distinguishing between two importantly different justification relations (evidential and nonevidential), I hope to show that the Sellarsian dilemma, or something like it, does offer a powerful argument against standard nondoxastic foundationalist theories. But this reconceived version of the argument does not support coherentism. Instead, I use it to argue for a strongly externalist epistemology.
Goldman, though still a reliabilist, has made some recent concessions to evidentialist epistemologies. I agree that reliabilism is most plausible when it incorporates certain evidentialist elements, but I try to minimize the evidentialist component. I argue that fewer beliefs require evidence than Goldman thinks, that Goldman should construe evidential fit in process reliabilist terms, rather than the way he does, and that this process reliabilist understanding of evidence illuminates such important epistemological concepts as propositional justification, ex ante justification, and defeat.
This innovative text is psychologically informed, both in its diagnosis of inferential errors and in teaching students how to watch out for and work around their natural intellectual blind spots. It also incorporates insights from epistemology and philosophy of science that are indispensable for learning how to evaluate premises. The result is a hands-on primer for real world critical thinking. The authors bring a fresh approach to the traditional challenges of a critical thinking course: effectively explaining the nature of validity, assessing deductive arguments, and identifying, reconstructing, and diagramming arguments, as well as causal and probabilistic inference. Additionally, they discuss in detail important, frequently neglected topics, including testimony and the evaluation of news and other information sources, the nature and credibility of science, rhetoric, and dialectical argumentation. The treatment of probability uses frequency trees and a frequency approach more generally, and argument reconstruction is taught using argument maps; these methods have been shown to improve students’ reasoning and argument evaluation.
The cognitive neuropsychological understanding of a cognitive system is roughly that of a ‘mental organ’, which is independent of other systems, specializes in some cognitive task, and exhibits a certain kind of internal cohesiveness. This is all quite vague, and I try to make it more precise. A more precise understanding of cognitive systems will make it possible to articulate in some detail an alternative to the Fodorian doctrine of modularity (since not all cognitive systems are modules), but it will also provide a better understanding of what a module is (since all modules are cognitive systems).
Stewart Cohen argues that much contemporary epistemological theorizing is hampered by the fact that ‘epistemic justification’ is a term of art and one that is never given any serious explication in a non-tendentious, theory-neutral way. He suggests that epistemologists are therefore better off theorizing in terms of rationality, rather than in terms of ‘epistemic justification’. Against this, I argue that even if the term ‘epistemic justification’ is not broadly known, the concept it picks out is quite familiar, and partly because it’s a term of art, justification talk is a better vehicle for philosophical theorizing. ‘Rational’ is too unclear for our philosophical purposes, and the fact that ‘epistemic justification’ gets fleshed out by appeal to substantive, controversial theses is no obstacle to its playing the needed role in epistemological theorizing.
Morphological content (MC) is content that is implicit in the standing structure of the cognitive system. Henderson and Horgan claim that MC plays a distinctive epistemological role unrecognized by traditional epistemic theories. I consider the possibilities that MC plays this role either in central cognition or in peripheral modules. I argue that the peripheral MC does not play an interesting epistemological role and that the central MC is already recognized by traditional theories.
Recent worries about possible epiphenomenalist consequences of nonreductive materialism are misplaced, not, as many have argued, because nonreductive materialism does not have epiphenomenalist implications but because the epiphenomenalist implications are actually virtues of the theory, rather than vices. It is only by showing how certain kinds of mental properties are causally impotent that cognitive scientific explanations of mentality as we know them are possible.
The traditional understanding of analyticity in terms of concept containment is revisited, but with a concept explicitly understood as a certain kind of mental representation and containment being read correspondingly literally. The resulting conception of analyticity avoids much of the vagueness associated with attempts to explicate analyticity in terms of synonymy by moving the locus of discussion from the philosophy of language to the philosophy of mind. The account provided here illustrates some interesting features of representations and explains, at least in part, the special epistemic status of analytic judgments.
An examination of the role played by general rules in Hume's positive (nonskeptical) epistemology. General rules for Hume are roughly just general beliefs. The difference between justified and unjustified belief is a matter of the influence of good versus bad general rules, the good general rules being the "extensive" and "constant" ones.
A short discussion piece arguing that the neuropsychological phenomenon of double dissociation is most revealing of underlying cognitive architecture because of the capacities that are spared, more than those that are lost.
In some recent work, Ernest Sosa rejects the “perceptual model” of rational intuition, according to which intuitions (beliefs formed by intuition) are justified by standing in the appropriate relation to a nondoxastic intellectual experience (a seeming-true, or the like), in much the way that perceptual beliefs are often held to be justified by an appropriate relation to nondoxastic sense experiential states. By extending some of Sosa’s arguments and adding a few of my own, I argue that Sosa is right to reject the perceptual model of intuition, and that we should reject the “perceptual model” of perception as well. Rational intuition and perception should both receive a virtue theoretic (e.g., reliabilist) account, rather than an evidentialist one. To this end, I explicitly argue against the Grounds Principle, which holds that all justified beliefs must be based on some adequate reason, or ground.
Part of a book symposium on Ernest Sosa's Knowing Full Well. An important feature of Sosa's epistemology is his distinction between animal knowledge and reflective knowledge. What exactly is reflective knowledge, and how is it superior to animal knowledge? Here I try to get clearer on what Sosa might mean by reflective knowledge and what epistemic role it is supposed to play.
Nearly everyone agrees that perception gives us justification and knowledge, and a great number of epistemologists endorse a particular two-part view about how this happens. The view is that perceptual beliefs get their justification from perceptual experiences, and that they do so by being based on them. Despite the ubiquity of these two views, I think that neither has very much going for it; on the contrary, there’s good reason not to believe either one of them.