Bishop and Trout here present a unique and provocative new approach to epistemology. Their approach aims to liberate epistemology from the scholastic debates of standard analytic epistemology, and treat it as a branch of the philosophy of science. The approach is novel in its use of cost-benefit analysis to guide people facing real reasoning problems and in its framework for resolving normative disputes in psychology. Based on empirical data, Bishop and Trout show how people can improve their reasoning by relying on Statistical Prediction Rules. They then develop and articulate the positive core of the book. Their view, Strategic Reliabilism, claims that epistemic excellence consists in the efficient allocation of cognitive resources to reliable reasoning strategies, applied to significant problems. The last third of the book develops the implications of this view for standard analytic epistemology; for resolving normative disputes in psychology; and for offering practical, concrete advice on how this theory can improve real people's reasoning. This is a truly distinctive and controversial work that spans many disciplines and will speak to an unusually diverse group, including people in epistemology, philosophy of science, decision theory, cognitive and clinical psychology, and ethics and public policy.
Science and philosophy study well-being with different but complementary methods. Marry these methods and a new picture emerges: To have well-being is to be "stuck" in a positive cycle of emotions, attitudes, traits and success. This book unites the scientific and philosophical worldviews into a powerful new theory of well-being.
The generality problem is widely considered to be a devastating objection to reliabilist theories of justification. My goal in this paper is to argue that a version of the generality problem applies to all plausible theories of justification. Assume that any plausible theory must allow for the possibility of reflective justification—S's belief, B, is justified on the basis of S's knowledge that she arrived at B as a result of a highly (but not perfectly) reliable way of reasoning, R. The generality problem applies to all cases of reflective justification: Given that B is the product of a process-token that is an instance of indefinitely many belief-forming process-types (or BFPTs), why is the reliability of R, rather than the reliability of one of the indefinitely many other BFPTs, relevant to B's justificatory status? This form of the generality problem is restricted because it applies only to cases of reflective justification. But unless it is solved, the generality problem haunts all plausible theories of justification, not just reliabilist ones.
Are thought experiments nothing but arguments? I argue that it is not possible to make sense of the historical trajectory of certain thought experiments if one takes them to be arguments. Einstein and Bohr disagreed about the outcome of the clock-in-the-box thought experiment, and so they reconstructed it using different arguments. This is to be expected whenever scientists disagree about a thought experiment's outcome. Since any such episode consists of two arguments but just one thought experiment, the thought experiment cannot be the arguments.
Epistemic responsibility involves at least two central ideas. (V) To be epistemically responsible is to display the virtue(s) epistemic internalists take to be central to justification (e.g., coherence, having good reasons, fitting the evidence). (C) In normal (non-skeptical) circumstances and in the long run, epistemic responsibility is strongly positively correlated with reliability. Sections 1 and 2 review evidence showing that for a wide range of real-world problems, the most reliable, tractable reasoning strategies audaciously flout the internalist's epistemic virtues. In Section 3, I argue that these results force us to give up either (V), our current conception of what it is to be epistemically responsible, or (C), the responsibility-reliability connection. I will argue that we should relinquish (V). This is likely to reshape our epistemic practices. It will force us to alter our epistemic judgments about certain instances of reasoning, to endorse some counterintuitive epistemic prescriptions, and to rethink what it is for cognitive agents to be epistemically responsible.
The flight to reference is a widely used strategy for resolving philosophical issues. The three steps in a flight to reference argument are: (1) offer a substantive account of the reference relation, (2) argue that a particular expression refers (or does not refer), and (3) draw a philosophical conclusion about something other than reference, like truth or ontology. It is our contention that whenever the flight to reference strategy is invoked, there is a crucial step that is left undefended, and that without a defense of this step, the flight to reference is a fatally flawed strategy; it cannot succeed in resolving philosophical issues. In this paper we begin by setting out the flight to reference strategy and explaining what is wrong with arguments that invoke the strategy. We then illustrate the problem by considering arguments for and against eliminative materialism. In the final section we argue that much the same problem undermines Philip Kitcher's attempt to defend scientific realism.
Our aim in this paper is to bring the woefully neglected literature on predictive modeling to bear on some central questions in the philosophy of science. The lesson of this literature is straightforward: For a very wide range of prediction problems, statistical prediction rules (SPRs), often rules that are very easy to implement, make predictions that are as reliable as, and typically more reliable than, those of human experts. We will argue that the success of SPRs forces us to reconsider our views about what is involved in understanding, explanation, good reasoning, and about how we ought to do philosophy of science.
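To give a feel for how simple such rules can be, here is a minimal, hypothetical sketch of a unit-weight linear SPR. The cue names, coding scheme, and cutoff are illustrative assumptions, not drawn from the paper or from any particular study in the predictive-modeling literature.

```python
# Hypothetical sketch of a unit-weight linear statistical prediction rule (SPR).
# Cue names, coding, and the cutoff are illustrative assumptions only.

def unit_weight_score(cues):
    """Sum a fixed set of cues, each coded so that higher values favor a
    positive prediction. The rule uses no case-by-case expert judgment:
    every case is scored in exactly the same way."""
    return sum(cues.values())

def predict(cues, cutoff=0.0):
    """Turn the score into a categorical prediction with a fixed cutoff."""
    return "positive" if unit_weight_score(cues) >= cutoff else "negative"

# Example case with made-up, standardized cue values.
case = {"past_performance": 1.2, "aptitude_test": 0.4, "risk_factor": -0.8}
print(predict(case))  # -> "positive"
```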
A heuristic is a rule of thumb. In psychology, heuristics are relatively simple rules for making judgments. A fast heuristic is easy to use and allows one to make judgments quickly. A frugal heuristic relies on a small fraction of the available evidence in making judgments. Typically, fast and frugal heuristics (FFHs) have, or are claimed to have, a further property: They are very reliable, yielding judgments that are about as accurate in the long run as ideal non-fast, non-frugal rules. This paper introduces some well-known examples of FFHs, raises some objections to the FFH program, and looks at the implications of those parts of the FFH program about which we can have some reasonable degree of confidence.
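One standard example of an FFH from the psychological literature is the Take The Best heuristic associated with Gigerenzer and colleagues. The sketch below is an illustrative reconstruction; the cues, their validity ordering, and the data are hypothetical placeholders rather than material from the paper.

```python
# Illustrative sketch of the Take The Best heuristic, a standard example of a
# fast and frugal heuristic. Cues, validity ordering, and data are hypothetical.

def take_the_best(option_a, option_b, cues_in_validity_order):
    """Decide which of two options scores higher on some criterion by checking
    binary cues one at a time, from most to least valid, stopping at the first
    cue that discriminates (frugal: later cues are never consulted; fast: a
    single discriminating cue settles the choice)."""
    for cue in cues_in_validity_order:
        a_val = option_a.get(cue, 0)
        b_val = option_b.get(cue, 0)
        if a_val != b_val:
            return "A" if a_val > b_val else "B"
    return "guess"  # no cue discriminates; fall back to guessing

# Which hypothetical city is larger?
city_a = {"has_intl_airport": 1, "is_capital": 0, "has_top_university": 1}
city_b = {"has_intl_airport": 1, "is_capital": 1, "has_top_university": 0}
validity_order = ["has_intl_airport", "is_capital", "has_top_university"]
print(take_the_best(city_a, city_b, validity_order))  # -> "B"
```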
Scientific realism says of our best scientific theories that (1) most of their important posits exist and (2) most of their central claims are approximately true. Antirealists sometimes offer the pessimistic induction in reply: since (1) and (2) are false about past successful theories, they are probably false about our own best theories too. The contemporary debate about this argument has turned (and become stuck) on the question, Do the central terms of successful scientific theories refer? For example, Larry Laudan offers a list of successful theories that employed central terms that failed to refer, and Philip Kitcher replies with a view about reference in which the central terms of such theories did sometimes refer. This article attempts to break this stalemate by proposing a direct version of the pessimistic induction, one that makes no explicit appeal to a substantive notion or theory of reference. While it is premature to say that this argument succeeds in showing that realism is probably false, the direct pessimistic induction is not subject to any kind of reference-based objection that might cripple a weaker, indirect version of the argument. Any attempt to trounce the direct pessimistic induction with a theory of reference fails.
Strategic Reliabilism is a framework that yields relative epistemic evaluations of belief-producing cognitive processes. It is a theory of cognitive excellence, or more colloquially, a theory of reasoning excellence (where 'reasoning' is understood very broadly as any sort of cognitive process for coming to judgments or beliefs). First introduced in our book, Epistemology and the Psychology of Human Judgment (henceforth EPHJ), the basic idea behind SR is that epistemically excellent reasoning is efficient reasoning that leads in a robustly reliable fashion to significant, true beliefs. It differs from most contemporary epistemological theories in two ways. First, it is not a theory of justification or knowledge – a theory of epistemically worthy belief. Strategic Reliabilism is a theory of epistemically worthy ways of forming beliefs. And second, Strategic Reliabilism does not attempt to account for an epistemological property that is assumed to be faithfully reflected in the epistemic judgments and intuitions of philosophers. If SR makes recommendations that accord with our reflective epistemic judgments and intuitions, great. If not, then so much the worse for our reflective epistemic judgments and intuitions.
Social epistemology is autonomous: When applied to the same evidential situations, the principles of social rationality and the principles of individual rationality sometimes recommend inconsistent beliefs. If we stipulate that reasoning rationally from justified beliefs to a true belief is normally sufficient for knowledge, the autonomy thesis implies that some knowledge is essentially social. When the principles of social and individual rationality are applied to the same justified evidence and recommend inconsistent beliefs, and the belief endorsed by social rationality is true, then that true belief would be an instance of social knowledge but not individual knowledge.
Alison Gopnik and Andrew Meltzoff have argued for a view they call the ‘theory theory’: theory change in children is similar to theory change in science. While their version of the theory theory has been criticized for depending on a number of disputed claims, we argue that there is a much more basic problem: the theory theory is multiply ambiguous. We show that it might be claiming that a similarity holds between theory change in children and (i) individual scientists, (ii) a rational reconstruction of a Superscientist, or (iii) the scientific community. We argue that (i) is false, (ii) is non-empirical (which is problematic since the theory theory is supposed to be a bold empirical hypothesis), and (iii) is either false or doesn’t make enough sense to have a truth-value. We conclude that the theory theory is an interesting failure. Its failure points the way to a full, empirical picture of scientific development, one that marries a concern with the social dynamics of science to a psychological theory of scientific cognition.
The theory-ladenness of perception argument is not a single argument at all. It is two clusters of arguments. The first cluster is empirical. These arguments typically begin with a discussion of one or more of the following psychological phenomena: (a) the conceptual penetrability of the visual system, (b) voluntary perceptual reversal of ambiguous figures, (c) adaptation to distorting lenses, or (d) expectation effects. From this evidence, proponents of theory-ladenness typically conclude that perception is in some sense "laden" with theory. The second cluster attempts to extract deep epistemological lessons from this putative fact. Some philosophers conclude that science is not (in any traditional sense) a rational activity, while others conclude that we must radically reconceptualize what scientific rationality involves. Once we understand the structure of these arguments, much conventional wisdom about the significance of the psychological data turns out to be false.
What factors are involved in the resolution of scientific disputes? What factors make the resolution of such disputes rational? The traditional view confers an important role on observation statements that are shared by proponents of competing theories. Rival theories make incompatible (sometimes contradictory) observational predictions about a particular situation, and the prediction made by one theory is borne out while the prediction made by the other is not. Paul Feyerabend, Thomas Kuhn, and Paul Churchland have called into question this account of how scientific disputes are resolved. According to these philosophers, substantially different and competing scientific theories are semantically incommensurable: those theories do not share a common observation language. Two charges have been leveled against the semantic incommensurability thesis. The first is that it ignores the fact that some semantic features of observational terms (e.g., their reference) can be expressed by proponents of competing theories. The second is that the semantic incommensurability thesis is self-defeating. In this paper I will argue that both of these charges are true but not for the reasons usually given.
Normative apriorist philosophers of science build purely normative a priori reconstructions of science, whereas descriptive naturalists eliminate the normative elements of the philosophy of science in favor of purely descriptive endeavors. I hope to exhibit the virtues of an alternative approach that appreciates both the normative and the natural in the philosophy of science.
Theory ladenness. Some philosophers claim that a plausible view about how our visual systems work either undermines or facilitates our ability to rationally adjudicate between competing theories on the basis of a theory-neutral observation language. I argue that these psychological premises do not support the epistemological conclusions drawn.
Scientific theories. I argue for a psychological plausibility constraint: An account of scientific theories should tell us how a theory is mentally represented. I tentatively advance an account that satisfies the constraint. Finally, I criticize the traditional view of theories and the semantic view of theories.
Conceptual clarity. Philosophers often offer classical accounts of terms; then others adduce alleged counterexamples. The success conditions on these accounts must include either preserving or revising the original term's extension. Given recent psychological theorizing, the probability that we can find an extension-preserving classical account of a term is very low. Furthermore, the search for such an account provides no benefits over the empirical effort to find the non-classical conditions we actually use in applying our terms. If the aim of counterexample philosophy is to non-arbitrarily revise the extension of the original term, I argue that we should choose a particular account of a term on the basis of how it performs in our best available theory on the subject.
Conclusion. I argue that normative apriorists unwittingly make defeasible empirical assumptions that, if false, would undermine their normative claims. Against descriptive naturalism I argue that the cost of ignoring normative issues is exorbitant. Finally, I defend a version of normative naturalism, a style of philosophy of science that is informed, but not engulfed, by empirical assumptions.
Basic human rights are “necessary for a government to be relied upon to make itself more just over time”. Ultimately, Talbott grounds basic human rights in our “capacity for autonomy”. While he is prepared to grant that autonomy may be intrinsically valuable, his primary focus is showing how societies that protect autonomy by respecting basic human rights better promote their citizens’ well-being.