We can infer moral conclusions from nonmoral evidence using a three-step procedure. First, we distinguish the processes generating belief so that their reliability in generating true belief is statistically predictable. Second, we assess the processes’ reliability, perhaps by observing how frequently they generate true nonmoral beliefs or logically inconsistent beliefs. Third, we adjust our credence in moral propositions in light of the truth ratios of the processes generating beliefs in them. This inferential route involves empirically discovering truths of the form “Process P, which generates belief in moral proposition M, has truth ratio T,” and using them to discover probabilities for moral propositions. The inferential route is noncircular, and progress along it is driven fundamentally by induction.
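The three-step procedure above can be sketched as a toy computation. This is an illustrative sketch only: the process names and track records below are invented for the example, not drawn from the paper.

```python
# Toy sketch of the inferential route described above (illustrative only).
# Step 1: individuate belief-forming processes.
# Step 2: estimate each process's truth ratio from its track record.
# Step 3: set credence in a moral proposition to the truth ratio of
#         the process that generated belief in it.

def truth_ratio(track_record):
    """Estimate reliability as the fraction of true beliefs generated."""
    return sum(track_record) / len(track_record)

# Hypothetical track records (True = the process produced a true belief).
processes = {
    "wishful_thinking": [True, False, False, False],
    "careful_reflection": [True, True, True, False],
}

def credence(generating_process):
    """Credence in a moral proposition M, given the process behind it."""
    return truth_ratio(processes[generating_process])

print(credence("careful_reflection"))  # 0.75
print(credence("wishful_thinking"))    # 0.25
```

The point of the sketch is only that the update is driven by an empirically estimated frequency, not by any moral premise — which is why the route is noncircular.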
The generality problem is one of the most pressing challenges for process reliabilism about justification. Thus far, one of the more promising responses is James Beebe’s tri-level statistical solution. Despite its initial plausibility, Beebe’s tri-level statistical solution has been shown to generate implausible justification verdicts in a variety of cases. Recently, Samuel Kampa has offered a new statistical solution to the generality problem. Kampa argues that the new statistical solution overcomes the challenges that undermined Beebe’s original solution. However, there’s good reason to believe that Kampa is mistaken. In this paper, I show that Kampa’s new statistical solution faces problems that are no less serious than the original objections to Beebe’s solution. Depending on how we interpret Kampa’s proposal, the new statistical solution either types belief-forming processes far too narrowly or fails to clarify the epistemic implications of reliabilism altogether. Either way, it fails to make substantive progress toward solving the generality problem.
This paper aims to show that Selim Berker’s widely discussed prime number case is merely an instance of the well-known generality problem for process reliabilism and thus arguably not as interesting a case as one might have thought. Initially, Berker’s case is introduced and interpreted. Then the most recent response to the case from the literature is presented. Finally, it is argued that Berker’s case is nothing but a straightforward consequence of the generality problem, i.e., the problematic aspect of the case for process reliabilism (if any) is already captured by the generality problem.
Recently John Turri (2015b) has argued, contra the orthodoxy amongst epistemologists, that reliability is not a necessary condition for knowledge. From this result, Turri (2015a, 2016a, 2017, 2019) defends a new account of knowledge, called abilism, that allows for unreliable knowledge. I argue that Turri's arguments fail to establish that unreliable knowledge is possible, and that his account of knowledge is false because reliability must be a necessary condition for knowledge.
Matthew Frise claims that reliabilist theories of justification have a temporality problem—the problem of providing a principled account of the temporal parameters of a process’s performance that determine whether that process is reliable at a given time. Frise considers a representative sample of principled temporal parameters and argues that there are serious problems with all of them. He concludes that the prospects for solving the temporality problem are bleak. Importantly, Frise argues that the temporality problem constitutes a new reason to reject reliabilism. On this point, I argue that Frise is mistaken. There are serious interpretive difficulties with Frise’s argument. In this essay, I show that there are principled and reasonable temporal parameters for the reliabilist to adopt that successfully undermine the interpretations of Frise’s argument that invoke only plausible premises. There are interpretations of Frise’s argument that leave reliabilism without a clear parameter solution. However, I argue that these interpretations invoke controversial premises that are at best unmotivated and at worst merely re-raise older disputes about reliabilism. In any event, the temporality problem fails to constitute a new reason to reject reliabilism.
The World Wide Web has had a notable impact on a variety of epistemically-relevant activities, many of which lie at the heart of the discipline of knowledge engineering. Systems like Wikipedia, for example, have altered our views regarding the acquisition of knowledge, while citizen science systems such as Galaxy Zoo have arguably transformed our approach to knowledge discovery. Other Web-based systems have highlighted the ways in which the human social environment can be used to support the development of intelligent systems, either by contributing to the provision of epistemic resources or by helping to shape the profile of machine learning. In the present paper, such systems are referred to as ‘knowledge machines’. In addition to providing an overview of the knowledge machine concept, the present paper reviews a number of issues that are associated with the scientific and philosophical study of knowledge machines. These include the potential impact of knowledge machines on the theory and practice of knowledge engineering, the role of social participation in the realization of intelligent systems, and the role of standardized, semantically-enriched data formats in supporting the ad hoc assembly of special-purpose knowledge systems and knowledge processing pipelines.
Proper functionalism claims that a belief has epistemic warrant only if it’s formed according to the subject’s truth-aimed cognitive design plan. The most popular putative counter-examples to proper functionalism all involve agents who form beliefs in seemingly warrant-enabling ways that don’t appear to proceed according to any sort of design. The Swampman case is arguably the most famous scenario of this sort. However, some proper functionalists accept that subjects like Swampman have warrant, opting instead to adopt a non-standard account of design. But critics of proper functionalism hold that this strategy comes at a high cost: the design-plan condition now seems explanatorily superfluous. James Taylor construes cases like Swampman as posing a dilemma for the proper functionalist: either deny warrant in these cases and concede that proper functionalism doesn’t capture our intuitions, or affirm warrant and undermine the explanatory power of the design-plan condition. Proper functionalists have replied to both horns of this dilemma. Recently, Kenny Boyce and Andrew Moon have argued that warrant-affirming intuitions on cases like Swampman are motivated by a principle that has a clear counter-example. Also, Alvin Plantinga presents a set of cases that supposedly cause problems for any analysis of warrant that lacks a design-plan condition. In this essay, I present a counter-argument to Boyce and Moon’s argument, and show that a more robust reliability condition can accommodate Plantinga’s problem cases. I conclude that we’re left with no good reason to doubt that cases like Swampman raise a troubling dilemma for the proper functionalist.
The New Evil Demon Problem is meant to show that reliabilism about epistemic justification is incompatible with the intuitive idea that the external-world beliefs of a subject who is the victim of a Cartesian demon could be epistemically justified. Here, I present a new argument that such beliefs can be justified on reliabilism. Whereas others have argued for this conclusion by making some alterations in the formulation of reliabilism, I argue that, as far as the said problem is concerned, such alterations are redundant. No reliabilist should fear the demon.
Vice epistemology, as Quassim Cassam understands it, is the study of the nature, identity, and significance of the epistemic vices. But what makes an intellectual vice a vice? Cassam calls his own view “Obstructivism” – intellectual vices are those traits, thinking styles, or attitudes that systematically obstruct the acquisition, retention, and transmission of knowledge.

I shall argue that Cassam’s account is an improvement upon virtue-reliabilism, and that it fares better against what I call Montmarquet’s objection than its immediate rivals. Nevertheless, I contend that it does not go far enough — Montmarquet’s objection stands.

I conclude that either the objection needs to be answered in some other way, or else proponents of Obstructivism need to explain why their account of the nature of the intellectual vices does not have the counterintuitive consequences it appears to have. Alternatively, another account of the nature of the intellectual vices needs to be sought.
Hilary Kornblith’s book is motivated by the conviction that philosophers have tended to overvalue and overemphasize reflection in their accounts of central philosophical phenomena. He seeks to pinpoint this tendency and to correct it.

Kornblith’s claim is not without precedent. It is an oft-repeated theme of 20th-century philosophy that philosophers have tended to give ‘overly intellectualized’ accounts of important phenomena. One thinks here of Wittgenstein, Ryle and many others.

One version of this charge is that philosophers have tended to appeal to higher-order thoughts when first-order thoughts about the world are all that’s needed.

A more specific version of this charge is that philosophers have tended to appeal to second-order thoughts with normative, or quasi-normative, contents when all that’s needed are first-order thoughts with factual contents.

It is this second version of the charge that Kornblith is particularly interested in pressing. Although he doesn’t spell it out, the connection between this project and Kornblith’s previous work on naturalistic conceptions of epistemology should be fairly obvious. Very roughly, if you want humans to look a lot closer to the lower animals, then you’d better think that most central human abilities can be explained without appeal to reflection and without appeal to normative thought.

What’s good and important about Kornblith’s book is that he gives this charge a sustained and illuminating treatment. He looks in detail at accounts of knowledge, reasoning, epistemic agency, free will and normativity; he identifies sympathetically some of the temptations to think that we must resort to second-order resources to explain these phenomena; and he attempts to show that the appeal never works and is, in any case, not needed, since first-order accounts manage very well.
Goldman, though still a reliabilist, has made some recent concessions to evidentialist epistemologies. I agree that reliabilism is most plausible when it incorporates certain evidentialist elements, but I try to minimize the evidentialist component. I argue that fewer beliefs require evidence than Goldman thinks, that Goldman should construe evidential fit in process reliabilist terms, rather than the way he does, and that this process reliabilist understanding of evidence illuminates such important epistemological concepts as propositional justification, ex ante justification, and defeat.
Cognitive penetration of perception is the idea that what we see is influenced by such states as beliefs, expectations, and so on. A perceptual belief that results from cognitive penetration may be less justified than a nonpenetrated one. Inferentialism is a kind of internalist view that tries to account for this by claiming that some experiences are epistemically evaluable, on the basis of why the perceiver has that experience, and the familiar canons of good inference provide the appropriate standards by which experiences are evaluated. I examine recent defenses of inferentialism by Susanna Siegel, Peter Markie, and Matthew McGrath and argue that the prospects for inferentialism are dim.
Contemporary philosophers nearly unanimously endorse knowledge reliabilism, the view that knowledge must be reliably produced. Leading reliabilists have suggested that reliabilism draws support from patterns in ordinary judgments and intuitions about knowledge, luck, reliability, and counterfactuals. That is, they have suggested a proto-reliabilist hypothesis about “commonsense” or “folk” epistemology. This paper reports nine experimental studies (N = 1262) that test the proto-reliabilist hypothesis by testing four of its principal implications. The main findings are that (a) commonsense fully embraces the possibility of unreliable knowledge, (b) knowledge judgments are surprisingly insensitive to information about reliability, (c) “anti-luck” intuitions about knowledge have nothing to do with reliability specifically, and (d) reliabilists have mischaracterized the intuitive counterfactual properties of knowledge and their relation to reliability. When combined with the weakness of existing arguments for reliabilism and the recent emergence of well-supported alternative views that predict the widespread existence of unreliable knowledge, the present findings are the final exhibit in a conclusive case for abandoning reliabilism in epistemology. I introduce an alternative theory of knowledge, abilism, which outperforms reliabilism and well explains all the available evidence.
We often evaluate belief-forming processes, agents, or entire belief states for reliability. This is normally done with the assumption that beliefs are all-or-nothing. How does such evaluation go when we’re considering beliefs that come in degrees? I consider a natural answer to this question that focuses on the degree of truth-possession had by a set of beliefs. I argue that this natural proposal is inadequate, but for an interesting reason. When we are dealing with all-or-nothing belief, high reliability leads to high levels of truth-possession. However, when it comes to degrees of belief, reliability and truth-possession part ways. The natural answer thus fails to be a good way to evaluate degrees of belief for reliability. I propose and develop an alternative method based on the notion of calibration, suggested by Frank Ramsey, which does not have this problem, and I consider why we should care about such assessments of reliability even if they are not tied directly to truth-possession.
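The Ramsey-style calibration idea invoked here can be illustrated with a toy check: group a believer's credences by level and compare each level with the observed frequency of truth at that level. The numbers below are invented for illustration, not drawn from the paper.

```python
# Toy calibration check for degrees of belief (illustrative only).
# A well-calibrated believer's 0.8-credences are true about 80% of
# the time, their 0.5-credences about 50% of the time, and so on.
from collections import defaultdict

def calibration_report(judgments):
    """Group (credence, truth-value) pairs by credence level and
    report the observed frequency of truth at each level."""
    buckets = defaultdict(list)
    for credence, was_true in judgments:
        buckets[credence].append(was_true)
    return {c: sum(vs) / len(vs) for c, vs in sorted(buckets.items())}

# Hypothetical degrees of belief, paired with whether each proposition
# turned out true.
judgments = [(0.8, True), (0.8, True), (0.8, True), (0.8, False),
             (0.5, True), (0.5, False)]

print(calibration_report(judgments))  # {0.5: 0.5, 0.8: 0.75}
```

In this invented record the 0.5-credences are perfectly calibrated while the 0.8-credences run slightly true less often than advertised — a reliability assessment that, as the abstract notes, need not track truth-possession directly.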
Collin Howson (2000) challenges van Cleve’s reliabilist defense of induction (1984) based on an adaptation of Goodman's paradox (the new riddle of induction). I will try to show that Howson’s argument does not succeed because it is self-defeating. Nevertheless, I point out another way in which Howson could have employed the new riddle to undermine the reliabilist defense.
According to epistemic internalism, the only facts that determine the justificational status of a belief are facts about the subject’s own mental states, like beliefs and experiences. Externalists instead hold that certain external facts, such as facts about the world or the reliability of a belief-producing mechanism, affect a belief’s justificational status. Some internalists argue that considerations about evil demon victims and brains in vats provide excellent reason to reject externalism: because these subjects are placed in epistemically unfavorable settings, externalism seems unable to account for the strong intuition that these subjects’ beliefs are nonetheless justified. I think these considerations do not at all help the internalist cause. I argue that by appealing to the anti-individualistic nature of perception, it can be shown that skeptical scenarios provide no reason to prefer internalism to externalism.
Is perception cognitively penetrable, and what are the epistemological consequences if it is? I address the latter of these two questions, partly by reference to recent work by Athanassios Raftopoulos and Susanna Siegel. Against the usual circularity readings of cognitive penetrability, I argue that cognitive penetration can be epistemically virtuous, when, and only when, it increases the reliability of perception.
This paper explores what constitutes reliability in persons, particularly intellectual reliability. It considers global reliability, the overall reliability of persons, encompassing both the theoretical and practical realms; sectorial reliability, that of a person in a subject-matter (or behavioral) domain; and focal reliability, that of a particular element, such as a belief. The paper compares reliability with predictability of the kind most akin to it and distinguishes reliability as an intellectual virtue from reliability as an intellectual power. The paper also connects reliability with insight, reasoning, knowledge, and trust. It is argued that insofar as reliability is an intellectual virtue, it must meet both external standards of correctitude and internal standards of justification.
This paper discusses two versions of reliabilism: modal and probabilistic reliabilism. Modal reliabilism faces the problem of the missing closeness metric for possible worlds, while probabilistic reliabilism faces the problem of the relevant reference class. Despite the severity of these problems, reliabilism is still very plausible (also for independent reasons). I propose to stick with reliabilism, propose a contextualist (or, alternatively, harmlessly relativist) solution to the above problems, and suggest that probabilistic reliabilism has the advantage over modal reliabilism.
It can often be heard in the hallways, and occasionally read in print, that reliabilism runs into special trouble regarding lottery cases. My main aim in this paper is to argue that this is not so. Nevertheless, lottery cases do force us to pay close attention to the relation between justification and probability.
Colin Howson argues that (1) my sociologistic reliabilism sheds no light on the objectivity of epistemic content, and that (2) sorites does not threaten the reliability of modus ponens. I reply that argument (1) misrepresents my position, and that argument (2) is beside the point.
Epistemic luck has been the focus of much discussion recently. Perhaps the most general knowledge-precluding type is veritic luck, where a belief is true but might easily have been false. Veritic luck has two sources, and so eliminating it requires two distinct conditions for a theory of knowledge. I argue that, when one sets out those conditions properly, a solution to the generality problem for reliabilism emerges.
Strategic Reliabilism is a framework that yields relative epistemic evaluations of belief-producing cognitive processes. It is a theory of cognitive excellence, or more colloquially, a theory of reasoning excellence (where 'reasoning' is understood very broadly as any sort of cognitive process for coming to judgments or beliefs). First introduced in our book, Epistemology and the Psychology of Human Judgment (henceforth EPHJ), the basic idea behind SR is that epistemically excellent reasoning is efficient reasoning that leads in a robustly reliable fashion to significant, true beliefs. It differs from most contemporary epistemological theories in two ways. First, it is not a theory of justification or knowledge – a theory of epistemically worthy belief. Strategic Reliabilism is a theory of epistemically worthy ways of forming beliefs. And second, Strategic Reliabilism does not attempt to account for an epistemological property that is assumed to be faithfully reflected in the epistemic judgments and intuitions of philosophers. If SR makes recommendations that accord with our reflective epistemic judgments and intuitions, great. If not, then so much the worse for our reflective epistemic judgments and intuitions.
In their recent book, Epistemology and the Psychology of Human Judgment, Michael Bishop and J.D. Trout have challenged Standard Analytic Epistemology (SAE) in all its guises and have endorsed a version of the "replacement thesis"--proponents of which aim at replacing the standard questions of SAE with psychological questions. In this article I argue that Bishop and Trout offer an incomplete epistemology that, as formulated, cannot address many of the core issues that motivate interest in epistemological questions to begin with, and so is not a fit replacement.
It is alleged that the causal inertness of abstract objects and the causal conditions of certain naturalized epistemologies precludes the possibility of mathematical knowledge. This paper rejects this alleged incompatibility, while also maintaining that the objects of mathematical beliefs are abstract objects, by incorporating a naturalistically acceptable account of ‘rational intuition.’ On this view, rational intuition consists in a non-inferential belief-forming process where the entertaining of propositions or certain contemplations results in true beliefs. This view is free of any conditions incompatible with abstract objects, for the reason that it is not necessary that S stand in some causal relation to the entities in virtue of which p is true. Mathematical intuition is simply one kind of reliable process type, whose inputs are not abstract numbers, but rather, contemplations of abstract numbers.
In order to shed light on the question of whether reliabilism entails or excludes certain kinds of truth theories, I examine two arguments that purport to establish that reliabilism cannot be combined with antirealist and epistemic theories of truth. I take antirealism about truth to be the denial of the recognition-transcendence of truth, and epistemic theories to be those that identify truth with some kind of positive epistemic status. According to one argument, reliabilism and antirealism are incompatible because the former takes epistemic justification to be recognition-transcendent in a certain sense that conflicts with the latter's denial of the recognition-transcendence of truth. I show that, because the recognition-transcendence of reliabilist justification is significantly weaker than the recognition-transcendence required by a realist conception of truth, antirealist theories of truth that deny the strong transcendence of truth do not threaten the externalist character of reliabilism. According to the second argument, reliabilism cannot be combined with an epistemic truth theory because reliabilists analyze positive epistemic status in terms of truth but epistemic theorists analyze truth in terms of positive epistemic status. However, I argue that reliabilists who wish to adopt an epistemic theory of truth can avoid circularity by appealing to a multiplicity of positive epistemic statuses.
Virtue reliabilism appears to have a major advantage over generic reliabilism: only the former has the resources to explain the intuition that knowledge is more valuable than mere true belief. I argue that this appearance is illusory. It is sustained only by the misguided assumption that a principled distinction can be drawn between those belief-forming methods that are grounded in the agent’s intellectual virtues, and those that are not. A further problem for virtue reliabilism is that of explaining why knowledge is more valuable than mere justified true belief. I argue that virtue reliabilism lacks the resources to explain this value difference. I conclude by considering what it would take for a theory to explain the extra value of knowledge over mere justified true belief.
Scientific measurements are made objective through the use of reliable instruments. Instruments can have this function because they can, as material objects, be investigated independently of the specific measurements at hand. However, their materiality appears to be crucial for the assessment of their reliability. The usual strategies for investigating an instrument’s reliability depend on and assume possibilities of control, and control is usually specified in terms of the materiality of the instrument and environment. The aim of this paper is to investigate the problem of reliability for non-material instruments, such as the instruments applied in the social sciences. Any lack of reliability in the instrument prevents the measurements from ever becoming objective.
Why has Thomas Reid’s philosophy been neglected? One answer to this question might cite Reid’s treatment by critics of his day. But Reid may also have been neglected because his terminology suggests a kind of quaint, naive dogmatism: a “philosophy of common sense” might belong to a philosopher who resists skepticism by just saying “no” to all that fancy philosophizing. Indeed, Reid tells us in the Inquiry: “I despise Philosophy, and renounce its guidance, let my soul dwell with Common Sense.” But Reid’s announcement holds only if skepticism can’t be refuted, and what Reid takes himself to have done is precisely that: refute skepticism. Philip de Bary’s Thomas Reid and Scepticism: His Reliabilist Response is an admirable account of Reid’s strongly philosophical response to skepticism.
John Greco's Putting Skeptics in their Place presents an illuminating perspective on the nature of the skeptical problem and how to respond to it. Building on Ernest Sosa's virtue epistemology, Greco develops an account of knowledge he calls “Agent Reliabilism”. In this essay, I will take up several issues regarding the details of this account.
This book revives inductive logic by bringing out the underlying epistemology. The resulting structural reliabilist theory propounds the view that justification supervenes on syntactic and semantic properties of sentences as justification-bearers. It is claimed to set up a genuine alternative to the prevailing theories of justification. Kawalec substantiates this claim by confronting structural reliabilism with a number of epistemological problems. While the book is addressed to both professionals and students of philosophical logic, probability, epistemology, and philosophy of science, it also surveys ideas central to the development of philosophy in the 20th century. It will be a valuable companion to multifarious graduate and postgraduate courses.
We propose to extend a reliabilist perspective from epistemology to the very concept of rational justification. Rationality is defined as a cognitive virtue contextually relative to an information domain, to the mean performance of a cognitive community, and to normal conditions of information gathering. This proposal answers the skeptical position derived from the evidence of the cognitive fallacies and, on the other hand, is consistent with the ecological approach to the cognitive biases. Rationality is conceived naturalistically as a control system for the flow of information: reliabilism is the approach that qualifies this system as virtuous. There can be domain-specific devices selected by evolution, although the constraints of the very flow of information can also be represented, even with imperfect means of formalization, and then rationality becomes reflective. In conclusion, reliable rationality is postulated as a more philosophically abstract concept than maximal, minimal, or bounded rationality.
I consider whether one particular anti-individualist claim, the doctrine of object-dependent thoughts (DODT), is compatible with the Principle of Privileged Access, or PPA, which states that, in general, a subject can have non-empirical knowledge of her thought contents. The standard defence of the compatibility of anti-individualism and PPA emphasises the reliability of the process which produces a subject's second-order beliefs about her thought contents. I examine whether this defence can be applied to DODT, given that DODT generates the possibility of illusions of thought. Drawing on the general epistemological literature, I distinguish several senses of reliability, and argue that in the relevant sense, 'global reliability', DODT does sometimes threaten reliability and hence PPA.
It has been suggested, recently and not so recently, by a number of analytic epistemologists that reliabilist and externalist accounts of justification and knowledge are inadequate responses to the goals of traditional epistemology and other goals of inquiry. But philosophers of science decry reliabilism and externalism because they are connected to traditional, analytic epistemology, an outmoded and utopian form of inquiry. Clearly, both groups of critics cannot be right. I think both groups are guilty of conceptual confusions that, once clarified, should allow the naturalization project to stand forth in a rather attractive light.
Experimental data are often acclaimed on the grounds that they can be consistently generated. They are, it is said, reproducible. In this paper I describe how this feature of experimental data (their pragmatic reliability) leads to their epistemic worth (their epistemic reliability). An important part of my description is the supposition that experimental procedures are to a certain extent fixed and stable. Various illustrations from the actual practice of science are introduced, the most important coming at the end of the paper with a discussion of Ray Davis' 1967 solar-neutrino detection experiment (as it is portrayed in Pinch, 1980).
This article asks the question, “What is reliable machine learning?” As I intend to answer it, this is a question about epistemic justification. Reliable machine learning gives justification for believing its output. Current approaches to reliability (e.g., transparency) involve showing the inner workings of an algorithm (functions, variables, etc.) and how they render outputs. We then have justification for believing the output because we know how it was computed. Thus, justification is contingent on what can be shown about the algorithm, its properties, and its behavior. In this paper, I defend computational reliabilism (CR). CR is a computationally-inspired offshoot of process reliabilism that does not require showing the inner workings of an algorithm. CR credits reliability to machine learning by identifying reliability indicators external to the algorithm (validation methods, knowledge-based integration, etc.). Thus, we have justification for believing the output of machine learning when we have identified the appropriate reliability indicators. CR is advanced as a more suitable epistemology for machine learning. The main goal of this article is to lay the groundwork for CR: how it works, and what we can expect from it as a justificatory framework for reliable machine learning.
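The contrast drawn here between inspecting an algorithm's inner workings and checking reliability indicators external to it can be illustrated with a minimal held-out-validation sketch. The model and data below are invented for the example; nothing in the sketch depends on the model's internals being visible, which is the point the abstract attributes to computational reliabilism.

```python
# Minimal sketch (illustrative only): held-out validation accuracy as
# an external reliability indicator. We never look inside the model;
# we only score its predictions on data it was not trained on.

def majority_class_model(train):
    """Stands in for an opaque learner: we only use its predictions."""
    labels = [y for _, y in train]
    majority = max(set(labels), key=labels.count)
    return lambda x: majority

def validation_accuracy(model, held_out):
    """External reliability indicator: accuracy on unseen examples."""
    correct = sum(1 for x, y in held_out if model(x) == y)
    return correct / len(held_out)

# Hypothetical labeled examples (feature, label).
train = [(1, "spam"), (2, "spam"), (3, "ham"), (4, "spam")]
held_out = [(5, "spam"), (6, "spam"), (7, "ham"), (8, "spam")]

model = majority_class_model(train)
print(validation_accuracy(model, held_out))  # 0.75
```

On the transparency approach, justification would require opening up `majority_class_model`; on the CR picture sketched here, the validation score itself is the kind of indicator that confers justification for believing the model's outputs.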