Bishop and Trout here present a unique and provocative new approach to epistemology. Their approach aims to liberate epistemology from the scholastic debates of standard analytic epistemology, and treat it as a branch of the philosophy of science. The approach is novel in its use of cost-benefit analysis to guide people facing real reasoning problems and in its framework for resolving normative disputes in psychology. Based on empirical data, Bishop and Trout show how people can improve their reasoning by relying on Statistical Prediction Rules. They then develop and articulate the positive core of the book. Their view, Strategic Reliabilism, claims that epistemic excellence consists in the efficient allocation of cognitive resources to reliable reasoning strategies, applied to significant problems. The last third of the book develops the implications of this view for standard analytic epistemology; for resolving normative disputes in psychology; and for offering practical, concrete advice on how this theory can improve real people's reasoning. This is a truly distinctive and controversial work that spans many disciplines and will speak to an unusually diverse group, including people in epistemology, philosophy of science, decision theory, cognitive and clinical psychology, and ethics and public policy.
Science and philosophy study well-being with different but complementary methods. Marry these methods and a new picture emerges: To have well-being is to be "stuck" in a positive cycle of emotions, attitudes, traits and success. This book unites the scientific and philosophical worldviews into a powerful new theory of well-being.
The generality problem is widely considered to be a devastating objection to reliabilist theories of justification. My goal in this paper is to argue that a version of the generality problem applies to all plausible theories of justification. Assume that any plausible theory must allow for the possibility of reflective justification—S's belief, B, is justified on the basis of S's knowledge that she arrived at B as a result of a highly (but not perfectly) reliable way of reasoning, R. The generality problem applies to all cases of reflective justification: Given that B is the product of a process-token that is an instance of indefinitely many belief-forming process-types (or BFPTs), why is the reliability of R, rather than the reliability of one of the indefinitely many other BFPTs, relevant to B's justificatory status? This form of the generality problem is restricted because it applies only to cases of reflective justification. But unless it is solved, the generality problem haunts all plausible theories of justification, not just reliabilist ones.
During the last 25 years, researchers studying human reasoning and judgment in what has become known as the “heuristics and biases” tradition have produced an impressive body of experimental work which many have seen as having “bleak implications” for the rationality of ordinary people (Nisbett and Borgida 1975). According to one proponent of this view, when we reason about probability we fall victim to “inevitable illusions” (Piattelli-Palmarini 1994). Other proponents maintain that the human mind is prone to “systematic deviations from rationality” (Bazerman & Neale 1986) and is “not built to work by the rules of probability” (Gould 1992). It has even been suggested that human beings are “a species that is uniformly probability-blind” (Piattelli-Palmarini 1994). This provocative and pessimistic interpretation of the experimental findings has been challenged from many different directions over the years. One of the most recent and energetic of these challenges has come from the newly emerging field of evolutionary psychology, where it has been argued that it’s singularly implausible to claim that our species would have evolved with no “instinct for probability” and, hence, be “blind to chance” (Pinker 1997, 351). Though evolutionary psychologists concede that it is possible to design experiments that “trick our probability…
Are thought experiments nothing but arguments? I argue that it is not possible to make sense of the historical trajectory of certain thought experiments if one takes them to be arguments. Einstein and Bohr disagreed about the outcome of the clock-in-the-box thought experiment, and so they reconstructed it using different arguments. This is to be expected whenever scientists disagree about a thought experiment's outcome. Since any such episode consists of two arguments but just one thought experiment, the thought experiment cannot be the arguments.
Epistemic responsibility involves at least two central ideas. (V) To be epistemically responsible is to display the virtue(s) epistemic internalists take to be central to justification (e.g., coherence, having good reasons, fitting the evidence). (C) In normal (non-skeptical) circumstances and in the long run, epistemic responsibility is strongly positively correlated with reliability. Sections 1 and 2 review evidence showing that for a wide range of real-world problems, the most reliable, tractable reasoning strategies audaciously flout the internalist's epistemic virtues. In Section 3, I argue that these results force us to give up either (V), our current conception of what it is to be epistemically responsible, or (C), the responsibility-reliability connection. I will argue that we should relinquish (V). This is likely to reshape our epistemic practices. It will force us to alter our epistemic judgments about certain instances of reasoning, to endorse some counterintuitive epistemic prescriptions, and to rethink what it is for cognitive agents to be epistemically responsible.
The flight to reference is a widely-used strategy for resolving philosophical issues. The three steps in a flight to reference argument are: (1) offer a substantive account of the reference relation, (2) argue that a particular expression refers (or does not refer), and (3) draw a philosophical conclusion about something other than reference, like truth or ontology. It is our contention that whenever the flight to reference strategy is invoked, there is a crucial step that is left undefended, and that without a defense of this step, the flight to reference is a fatally flawed strategy; it cannot succeed in resolving philosophical issues. In this paper we begin by setting out the flight to reference strategy and explaining what is wrong with arguments that invoke the strategy. We then illustrate the problem by considering arguments for and against eliminative materialism. In the final section we argue that much the same problem undermines Philip Kitcher's attempt to defend scientific realism.
Standard Analytic Epistemology (SAE) names a contingently clustered class of methods and theses that have dominated English-speaking epistemology for about the past half-century. The major contemporary theories of SAE include versions of foundationalism, coherentism, reliabilism, and contextualism. While proponents of SAE don’t agree about how to define naturalized epistemology, most agree that a thoroughgoing naturalism in epistemology can’t work. For the purposes of this paper, we will suppose that a naturalistic theory of epistemology takes as its core, as its starting-point, an empirical theory. The standard argument against naturalistic approaches to epistemology is that empirical theories are essentially descriptive, while epistemology is essentially prescriptive, and a descriptive theory cannot yield normative, evaluative prescriptions. In short, naturalistic theories cannot overcome the is-ought divide. Our main goal in this paper is to show that the standard argument against naturalized epistemology has it almost exactly backwards.
Our aim in this paper is to bring the woefully neglected literature on predictive modeling to bear on some central questions in the philosophy of science. The lesson of this literature is straightforward: For a very wide range of prediction problems, statistical prediction rules (SPRs), often rules that are very easy to implement, make predictions that are as reliable as, and typically more reliable than, those of human experts. We will argue that the success of SPRs forces us to reconsider our views about what is involved in understanding, explanation, good reasoning, and about how we ought to do philosophy of science.
Why should a thought experiment, an experiment that only exists in people's minds, alter our fundamental beliefs about reality? After all, isn't reasoning from the imaginary to the real a sign of psychosis? A historical survey of how thought experiments have shaped our physical laws might lead one to believe that it's not the case that the laws of physics lie - it's that they don't even pretend to tell the truth. My aim in this paper is to defend an account of thought experiments that fits smoothly into our understanding of the historical trajectory of actual thought experiments and that explains how any rational person could allow an imagined, unrealized (or unrealizable) situation to change their conception of the universe.
Through a collection of original essays from leading philosophical scholars, _Stich and His Critics_ provides a thorough assessment of the key themes in the career of philosopher Stephen Stich. It explores some of philosophy's most hotly-debated contemporary topics, including mental representation, theory of mind, nativism, moral philosophy, and naturalized epistemology.
In “Epistemology Naturalized” Quine famously suggests that epistemology, properly understood, “simply falls into place as a chapter of psychology and hence of natural science” (1969, 82). Since the appearance of Quine’s seminal article, virtually every epistemologist, including the later Quine (1986, 664), has repudiated the idea that a normative discipline like epistemology could be reduced to a purely descriptive discipline like psychology. Working epistemologists no longer take Quine’s vision in “Epistemology Naturalized” seriously. In this paper, I will explain why I think this is a mistake.
A heuristic is a rule of thumb. In psychology, heuristics are relatively simple rules for making judgments. A fast heuristic is easy to use and allows one to make judgments quickly. A frugal heuristic relies on a small fraction of the available evidence in making judgments. Typically, fast and frugal heuristics (FFHs) have, or are claimed to have, a further property: They are very reliable, yielding judgments that are about as accurate in the long run as ideal non-fast, non-frugal rules. This paper introduces some well-known examples of FFHs, raises some objections to the FFH program, and looks at the implications of those parts of the FFH program about which we can have some reasonable degree of confidence.
Scientific realism says of our best scientific theories that (1) most of their important posits exist and (2) most of their central claims are approximately true. Antirealists sometimes offer the pessimistic induction in reply: since (1) and (2) are false about past successful theories, they are probably false about our own best theories too. The contemporary debate about this argument has turned (and become stuck) on the question, Do the central terms of successful scientific theories refer? For example, Larry Laudan offers a list of successful theories that employed central terms that failed to refer, and Philip Kitcher replies with a view about reference in which the central terms of such theories did sometimes refer. This article attempts to break this stalemate by proposing a direct version of the pessimistic induction, one that makes no explicit appeal to a substantive notion or theory of reference. While it is premature to say that this argument succeeds in showing that realism is probably false, the direct pessimistic induction is not subject to any kind of reference-based objection that might cripple a weaker, indirect version of the argument. Any attempt to trounce the direct pessimistic induction with a theory of reference fails.
In this paper, I propose a novel approach to investigating the nature of well-being and a new theory about well-being. The approach is integrative and naturalistic. It holds that a theory of well-being should account for two different classes of evidence—our commonsense judgments about well-being and the science of well-being (i.e., positive psychology). The network theory holds that a person is in the state of well-being if she instantiates a homeostatically clustered network of feelings, emotions, attitudes, behaviors, traits, and interactions with the world that tends to have a relatively high number of states that feel good, that lead to states that feel good, or that are valued by the agent or her culture.
Social epistemology is autonomous: When applied to the same evidential situations, the principles of social rationality and the principles of individual rationality sometimes recommend inconsistent beliefs. If we stipulate that reasoning rationally from justified beliefs to a true belief is normally sufficient for knowledge, the autonomy thesis implies that some knowledge is essentially social. When the principles of social and individual rationality are applied to justified evidence and recommend inconsistent beliefs and the belief endorsed by social rationality is true, then that true belief would be an instance of social knowledge but not individual knowledge.
Strategic Reliabilism is a framework that yields relative epistemic evaluations of belief-producing cognitive processes. It is a theory of cognitive excellence, or more colloquially, a theory of reasoning excellence (where 'reasoning' is understood very broadly as any sort of cognitive process for coming to judgments or beliefs). First introduced in our book, Epistemology and the Psychology of Human Judgment (henceforth EPHJ), the basic idea behind SR is that epistemically excellent reasoning is efficient reasoning that leads in a robustly reliable fashion to significant, true beliefs. It differs from most contemporary epistemological theories in two ways. First, it is not a theory of justification or knowledge – a theory of epistemically worthy belief. Strategic Reliabilism is a theory of epistemically worthy ways of forming beliefs. And second, Strategic Reliabilism does not attempt to account for an epistemological property that is assumed to be faithfully reflected in the epistemic judgments and intuitions of philosophers. If SR makes recommendations that accord with our reflective epistemic judgments and intuitions, great. If not, then so much the worse for our reflective epistemic judgments and intuitions.
Alison Gopnik and Andrew Meltzoff have argued for a view they call the ‘theory theory’: theory change in science and children are similar. While their version of the theory theory has been criticized for depending on a number of disputed claims, we argue that there is a fundamental problem which is much more basic: the theory theory is multiply ambiguous. We show that it might be claiming that a similarity holds between theory change in children and (i) individual scientists, (ii) a rational reconstruction of a Superscientist, or (iii) the scientific community. We argue that (i) is false, (ii) is non-empirical (which is problematic since the theory theory is supposed to be a bold empirical hypothesis), and (iii) is either false or doesn’t make enough sense to have a truth-value. We conclude that the theory theory is an interesting failure. Its failure points the way to a full, empirical picture of scientific development, one that marries a concern with the social dynamics of science to a psychological theory of scientific cognition.
This paper reviews the recent (post-DSM) history of subjective and semi-structured methods of psychiatric diagnosis, as well as evidence for the superiority of structured and computer-aided diagnostic techniques. While there is evidence that certain forms of therapy are effective for alleviating the psychiatric suffering, distress, and dysfunction associated with certain psychiatric disorders, this paper addresses some of the difficult methodological and ethical challenges of evaluating the effectiveness of therapy.
The theory-ladenness of perception argument is not an argument at all. It is two clusters of arguments. The first cluster is empirical. These arguments typically begin with a discussion of one or more of the following psychological phenomena: (a) the conceptual penetrability of the visual system, (b) voluntary perceptual reversal of ambiguous figures, (c) adaptation to distorting lenses, or (d) expectation effects. From this evidence, proponents of theory-ladenness typically conclude that perception is in some sense "laden" with theory. The second cluster attempts to extract deep epistemological lessons from this putative fact. Some philosophers conclude that science is not (in any traditional sense) a rational activity, while others conclude that we must radically reconceptualize what scientific rationality involves. Once we understand the structure of these arguments, much conventional wisdom about the significance of the psychological data turns out to be false.
A theory of rationality is a theory that evaluates instances of reasoning as rational, irrational, or (ir)rational to some degree. Theories can be categorized as rule-based or consequentialist. Rule-based theories say that rational reasoning accords with certain rules (e.g., of logic or probability). Consequentialist theories say that rational reasoning tends to produce good consequences. For instance, the reliabilist takes rationality to be reasoning that tends to produce mostly true beliefs. The pragmatist takes it to be reasoning that tends to produce mostly useful beliefs. This article reviews some of the features and the challenges of rule-based, reliabilist, and pragmatist theories of rationality.
By making plausible the Diversity Thesis (different people have systematically different and incompatible packages of epistemic intuitions), experimental epistemology raises the specter of the shifting-sands problem: the evidence base for epistemology contains systematic inconsistencies. In response to this problem, some philosophers deny the Diversity Thesis, while others flirt with denying the Evidence Thesis (in normal circumstances, the epistemic intuition that p is prima facie evidence that p is true). We propose to accept both theses. The trick to living with the shifting-sands problem is to expand epistemology’s evidential base so as to include scientific evidence. This evidence can provide principled grounds on which to decide between incompatible intuitions. The idea of resolving inconsistencies in an evidential base by adding more independent lines of evidence is commonplace in science. And in philosophy, it is simply Wide Reflective Equilibrium. We contend that the idea that epistemology would depend crucially on scientific evidence seems radical because many traditional epistemologists practice reflective equilibrium that is WINO, Wide In Name Only. We suggest five different lines of scientific evidence that can be, and have been, used in support of non-WINO epistemological theories.
What factors are involved in the resolution of scientific disputes? What factors make the resolution of such disputes rational? The traditional view confers an important role on observation statements that are shared by proponents of competing theories. Rival theories make incompatible (sometimes contradictory) observational predictions about a particular situation, and the prediction made by one theory is borne out while the prediction made by the other is not. Paul Feyerabend, Thomas Kuhn, and Paul Churchland have called into question this account of dispute resolution. According to these philosophers, substantially different and competing scientific theories are semantically incommensurable: those theories do not share a common observation language. Two charges have been leveled against the semantic incommensurability thesis. The first is that it ignores the fact that some semantic features of observational terms (e.g., their reference) can be expressed by proponents of competing theories. The second is that the semantic incommensurability thesis is self-defeating. In this paper I will argue that both of these charges are true but not for the reasons usually given.
Philosophers investigate the nature of morality. And scientists study the moral judgments people make, the moral norms people enforce, and the systems of moral rules people embrace. What is the relationship between these investigations? The traditional philosophical view is that an unbreachable wall divides these activities. Philosophy and science investigate entirely separate domains. Philosophy investigates the normative realm – the true nature of morality. Science investigates the descriptive realm – how different people or groups of people think about morality and how they deploy moral norms and rules. Just as it would be absurd to investigate the nature of the heavens by exploring how different people think about the heavens, it would be absurd to investigate the nature of morality by exploring how different people think about morality.
There are simple rules for making important judgments that are more reliable than experts, but people refuse to use them. People refuse even when they are told that these rules are more reliable than they are. When we say that people “refuse” to use the rule, we do not mean that people stubbornly refuse to carry out the steps indicated by the rule. Rather, people defect from the rule (i.e., they overturn the rule’s judgment) so often that they end up reasoning about as reliably as they would have without the rule, and less reliably than the rule all by itself. We have two aims in this paper. First, we will explain why (at least some) simple rules are so reliable and why people too often defect from them. And second, we will argue that this selective defection phenomenon raises a serious problem for all epistemological theories of justification. We will suggest that the best way to escape this problem is to change the focus of contemporary epistemology.
Long recognized as one of the great exemplary writers of our modernity, André du Bouchet leaves us a richly diversified body of work, at once dense and transparent, cross-generic in many respects yet incontestably poetic in its conception and practice. The present study focuses on the many texts (essays, translations, notebook entries, and other accompanying pieces) in which a deeply felt, at times obsessively lived critical meditation intertwines with an astonishingly original poetic writing, one that seeks to establish its own specificities naturally but discreetly while probing, from highly varied angles of approach, those of the great authors and artists it constantly and freely interrogates. A writing of alterity and non-difference, of self-multiplication and intersubjective harmonization, the writing analyzed here, with its texts devoted to Baudelaire or Hugo, Tal-Coat or Segers, Mandelstam or Joyce, Poussin or Hölderlin, never ceases to reveal that instinct of generous and fraternal affinity which, at the heart of the brilliant explorations the self devotes to its being-in-the-world, weaves its network of subtle and sure resonances.
Normative apriorist philosophers of science build purely normative a priori reconstructions of science, whereas descriptive naturalists eliminate the normative elements of the philosophy of science in favor of purely descriptive endeavors. I hope to exhibit the virtues of an alternative approach that appreciates both the normative and the natural in the philosophy of science. Theory ladenness: Some philosophers claim that a plausible view about how our visual systems work either undermines or facilitates our ability to rationally adjudicate between competing theories on the basis of a theory-neutral observation language. I argue that these psychological premises do not support the epistemological conclusions drawn. Scientific theories: I argue for a psychological plausibility constraint: an account of scientific theories should tell us how a theory is mentally represented. I tentatively advance an account that satisfies the constraint. Finally, I criticize the traditional view of theories and the semantic view of theories. Conceptual clarity: Philosophers often offer classical accounts of terms; then others adduce alleged counterexamples. The success conditions on these accounts must include either preserving or revising the original term's extension. Given recent psychological theorizing, the probability that we can find an extension-preserving classical account of a term is very low. Furthermore, it provides no benefits over the empirical effort to find the non-classical conditions we actually use in applying our terms. If the aim of counterexample philosophy is to non-arbitrarily revise the extension of the original term, I argue that we should choose a particular account of a term on the basis of how it performs in our best available theory on the subject. Conclusion: I argue that normative apriorists unwittingly make defeasible empirical assumptions that, if false, would undermine their normative claims.
Against descriptive naturalism I argue that the cost of ignoring normative issues is exorbitant. Finally, I defend a version of normative naturalism, a style of philosophy of science that is informed--but not engulfed--by empirical assumptions.
J.D. Trout and I started this project in 2000. Our goal was to write a book that was interesting, opinionated, accessible, and fun to read. Here are some excerpts from the first two pages of chapter 1: Excerpts [pdf]. The cover photo is a still of the great Buster Keaton from his movie, The General.
Basic human rights are “necessary for a government to be relied upon to make itself more just over time”. Ultimately, Talbott grounds basic human rights in our “capacity for autonomy”. While he is prepared to grant that autonomy may be intrinsically valuable, his primary focus is showing how societies that protect autonomy by respecting basic human rights better promote their citizens’ well-being.
Semantic essentialism holds that any scientific term that appears in a well-confirmed scientific theory has a fixed kernel of meaning. Semantic essentialism cannot make sense of the strategies scientists use to argue for their views. Newton's central optical expression "light ray" suggests a context-sensitive view of scientific language. On different occasions, Newton's expression could refer to different things depending on his particular argumentative goals - a visible beam, an irreducibly smallest section of propagating light, or a traveling particle of light. Essentialist views are too crude to account for the richness and subtleties present in actual episodes of scientific debate and theory-change.