An enduring question in the philosophy of science is whether a scientific theory deserves more credit for its successful predictions than for accommodating data that was already known when the theory was developed. In The Paradox of Predictivism, Eric Barnes argues that the successful prediction of evidence testifies to the general credibility of the predictor in a way that accommodated evidence does not, when the evidence is used in the process of endorsing the theory. He illustrates his argument with an important episode from nineteenth-century chemistry: Mendeleev's Periodic Law and its successful predictions of the existence of various elements. The consequences of this account of predictivism for the realist/anti-realist debate are considerable, and strengthen the status of the 'no miracle' argument for scientific realism. Barnes's important and original contribution to the debate will interest a wide range of readers in philosophy of science.
Philip Kitcher has proposed a theory of explanation based on the notion of unification. Despite the genuine interest and power of the theory, I argue here that it suffers from a fatal deficiency: it is intrinsically unable to account for the asymmetric structure of explanation, and thus ultimately falls prey to a problem similar to the one that beset Hempel's D-N model. I conclude that Kitcher is wrong to claim that one can settle the issue of an argument's explanatory force merely on the basis of considerations about the unifying power of the argument pattern the argument instantiates.
The theory of explanatory unification was first proposed by Friedman (1974) and developed by Kitcher (1981, 1989). The primary motivation for this theory, it seems to me, is the argument that this account of explanation is the only account that correctly describes the genesis of scientific understanding. Despite the apparent plausibility of Friedman's argument to this effect, however, I argue here that the unificationist thesis of understanding is false. The theory of explanatory unification as articulated by Friedman and Kitcher thus emerges as fundamentally misconceived.
Explaining Brute Facts. Eric Barnes - 1994 - PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1994: 61-68.
I aim to show that one way of testing the mettle of a theory of scientific explanation is to inquire what that theory entails about the status of brute facts. Here I consider the nature of brute facts and survey several contemporary accounts of explanation vis-à-vis this subject. One problem with these accounts is that they seem to entail that brute facts represent a gap in scientific understanding. I argue that brute facts are non-mysterious and indeed are even explainable by the lights of Salmon's ontic conception of explanation. The plausibility of various models of explanation, I suggest, depends to some extent on the tendency of their proponents to focus on certain examples of explananda; I ponder brute facts qua explananda here as a way of helping us to recognize this dependency.
In this paper I develop a theory of contrastive why questions that establishes under what conditions it is sensible to ask "why p rather than q?". p and q must be outcomes of a single type of causal process.
The miracle argument for scientific realism can be cast in two forms: according to the miraculous theory argument, realism is the only position which does not make the empirical successes of particular theories miraculous. According to the miraculous choice argument, realism is the only position which does not render the fact that empirically successful theories have been chosen a miracle. A vast literature discusses the miraculous theory argument, but the miraculous choice argument has been unjustifiably neglected. I raise two objections to Richard Boyd's defense of the latter: (1) we have no miracle-free account of the emergence of take-off theories, and (2) the anti-realist can account for the non-miraculous choice of empirically successful theories by attributing mere empirical adequacy to background theory. I argue that the availability of extra-empirical criteria that are arguably truth-conducive but not theory-laden suffices to answer (1), and that the unavailability of extra-empirical criteria that are conducive to empirical adequacy but not necessarily to truth (and are also not theory-laden) constitutes a reply to (2). The prospects for a realist victory are at least somewhat promising, on a controversial assumption about the rate at which empirically successful theories emerge.
Some compatibilists have responded to the manipulation argument for incompatibilism by proposing an historical theory of moral responsibility which, according to one version, requires that agents be morally responsible for having their pro-attitudes if they are to be morally responsible for acting on them. This proposal, however, obviously leads to an infinite regress problem. I consider a proposal by Haji and Cuypers that addresses this problem and argue that it is unsatisfactory. I then go on to propose a new solution inspired by the libertarian theory of Robert Kane.
This paper proposes a solution to David Miller's Minnesotan-Arizonan demonstration of the language dependence of truthlikeness (Miller 1974), along with Miller's first-order demonstration of the same (Miller 1978). It is assumed, with Peter Urbach, that the implication of these demonstrations is that the very notion of truthlikeness is intrinsically language dependent and thus non-objective. As such, truthlikeness cannot supply a basis for an objective account of scientific progress. I argue that, while Miller is correct in arguing that the number of true atomic sentences of a false theory is language dependent, the number of known sentences (under certain straightforward assumptions) is conserved by translation; degree of knowledge, unlike truthlikeness, is thus a linguistically invariant notion. It is concluded that the objectivity of scientific progress must be grounded on the fact (noted in Cohen 1980) that knowledge, not mere truth, is the aim of science.
In The Paradox of Predictivism I tried to demonstrate that there is an intimate relationship between predictivism and epistemic pluralism. Here I respond to various published criticisms of some of the key points from Paradox from David Harker, Jarrett Leplin, and Clark Glymour. Foci include my account of predictive novelty, the claim that predictivism has two roots (the prediction per se and predictive success), and my account of why Mendeleev's predictions carried special weight in confirming the Periodic Law of the Elements.
Predictivism asserts that novel confirmations carry special probative weight. Epistemic pluralism asserts that the judgments of agents (about, e.g., the probabilities of theories) carry epistemic import. In this paper, I propose a new theory of predictivism that is tailored to pluralistic evaluators of theories. I replace the orthodox notion of use-novelty with a notion of endorsement-novelty, and argue that the intuition that predictivism is true has two roots. I provide a detailed Bayesian rendering of this theory and argue that pluralistic theory evaluation pervades scientific practice. I compare my account of predictivism with those of Maher and Worrall.
Contents: Introduction; Why construction is a red herring for pluralist evaluators; The unvirtuous accommodator; Virtuous endorsers and the two roots of predictivism; The two roots in Bayesian terms: the priors and background beliefs of endorsers; Who are the pluralist evaluators?; Two contemporary theories of predictivism (7.1 Maher: Reliable methods of theory construction; 7.2 Worrall: The confirmation of core ideas); Conclusion.
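The Bayesian shape of this kind of argument can be illustrated with a toy calculation. The two-hypothesis setup, the specific numbers, and the shortcut of modeling the endorser's certified credibility as a higher effective prior are my own illustration, not Barnes's formal model:

```python
# Toy Bayesian illustration (a sketch, not Barnes's actual model): a
# pluralist evaluator's posterior in theory T after evidence E depends on
# the prior she assigns T, and a successful novel prediction can raise
# that prior by certifying the endorser's background beliefs.

def posterior(prior_T, likelihood_E_given_T, likelihood_E_given_notT):
    """Bayes' theorem over the two-hypothesis partition {T, not-T}."""
    numerator = likelihood_E_given_T * prior_T
    marginal = numerator + likelihood_E_given_notT * (1 - prior_T)
    return numerator / marginal

# Accommodation: the evaluator has no independent signal about the
# endorser's credibility, so she keeps a modest prior in T.
p_accommodate = posterior(prior_T=0.2, likelihood_E_given_T=0.9,
                          likelihood_E_given_notT=0.3)

# Prediction: the endorser's successful novel prediction certifies her
# background beliefs, so the evaluator's effective prior in T is higher.
p_predict = posterior(prior_T=0.4, likelihood_E_given_T=0.9,
                      likelihood_E_given_notT=0.3)

print(round(p_accommodate, 3))  # 0.429
print(round(p_predict, 3))      # 0.667
```

On this rendering the same evidence E yields a higher posterior when predicted than when accommodated, which is one way of cashing out the intuition that novel confirmations carry special probative weight for a pluralist evaluator.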
Predictivism asserts that where evidence E confirms theory T, E provides stronger support for T when E is predicted on the basis of T and then confirmed than when E is known before T's construction and 'used', in some sense, in the construction of T. Among the most interesting attempts to argue that predictivism is a true thesis (under certain conditions) is that of Patrick Maher (1988, 1990, 1993). The purpose of this paper is to investigate the nature of predictivism using Maher's analysis as a starting point. I briefly summarize Maher's primary argument and expand upon it; I explore related issues pertaining to the causal structure of empirical domains and the logic of discovery.
Sherrilyn Roush's Tracking Truth provides a sustained and ambitious development of the basic idea that knowledge is true belief that tracks the truth. In this essay, I provide a quick synopsis of Roush's book and offer a substantive discussion of her analysis of scientific evidence. Roush argues that, for e to serve as evidence for h, it should be easier to determine the truth value of e than it is to determine the truth value of h, an ideal she refers to as 'leverage'. She defends a detailed method by which the value of a probability p is computed without 'direct' information about p, using only evidence bearing on p, from which the value of p is then derived. She presents an example of how to use her leverage method, which I argue involves a certain critical mistake. I show how the leveraging method can be used in a way that is sound, and I conclude with a few remarks about the importance of distinguishing clearly between prior and posterior probabilities.
A pluralistic scientific method is one that incorporates a variety of points of view in scientific inquiry. This paper investigates one example of pluralistic method: the use of weighted averaging in probability estimation. I consider two methods of weight determination, one based on disjoint evidence possession and the other on track record. I argue that weighted averaging provides a rational procedure for probability estimation under certain conditions. I consider a strategy for calculating 'mixed weights' which incorporate mixed information about agent credibility. I address various objections to the weighted averaging technique and conclude that the technique is a promising one in various respects.
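The core arithmetic of weighted averaging is simple and can be sketched as follows. The particular weights and estimates below are hypothetical, and the function is a minimal illustration rather than the paper's own weight-determination procedure:

```python
# Minimal sketch of weighted averaging for probability estimation.
# The weighting scheme is illustrative; the paper considers weights
# based on disjoint evidence possession and on track record.

def weighted_average(estimates, weights):
    """Pool several agents' probability estimates using normalized weights."""
    if len(estimates) != len(weights):
        raise ValueError("need one weight per estimate")
    total = sum(weights)
    return sum(p * w for p, w in zip(estimates, weights)) / total

# Three agents estimate the probability of the same hypothesis; the
# weights (hypothetical) might reflect each agent's predictive track
# record, so the better-tested agent pulls the pooled estimate her way.
estimates = [0.9, 0.6, 0.3]
weights = [5, 2, 1]
print(weighted_average(estimates, weights))  # 0.75
```

Because the weights are normalized, the pooled value always lies between the lowest and highest individual estimates, which is one reason the procedure behaves reasonably as an aggregation rule.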
This paper critically responds to Tim Maudlin's argument against a computational theory of consciousness. It is argued that his artfully constructed Turing machine 'Olympia' does not meet an important condition for computation, namely that the computed input serve as an active cause of the computational activity. Thus a computational theory of consciousness remains a live option.
Some proponents of compatibilist moral responsibility have proposed an historical theory which requires that agents deploy character control in order to be morally responsible. An important type of argument for the character control condition is the manipulation argument, such as Mele's example of Beth and Chuck. In this paper I show that Beth can be exonerated on various conditions other than her failure to execute character control. I propose a new character, Patty, who meets these conditions and is, I argue, morally responsible for her actions despite lacking character control. Thus the character control condition is unmotivated. I suggest there may be an alternative basis for an historical theory of moral responsibility nonetheless.
David Miller has demonstrated to the satisfaction of a variety of philosophers that the accuracy of false quantitative theories is language dependent (cf. Miller 1975). This demonstration renders the accuracy-based mode of comparison for such theories obsolete. The purpose of this essay is to supply an alternate basis for theory comparison, which I here call the knowledge-based mode of quantitative theory comparison. It is argued that the status of a quantitative theory as knowledge depends primarily on the soundness of the measurement procedure which produced the theory; such soundness is invariant, on my view, under Milleresque translations. This point is the basis for the linguistic invariance of knowledgelikeness. When the aim of science is construed not simply in terms of the truthlikeness or accuracy of theories, but in terms of the knowledge such theories embody, Miller's language dependence problem is overcome. One result of this analysis is that the possibility of objective scientific progress is restored, a possibility that Miller's analysis had prima facie defeated.
Predictivism holds that, where evidence E confirms theory T, E confirms T more strongly when E is predicted on the basis of T and subsequently confirmed than when E is known in advance of T's formulation and used, in some sense, in the formulation of T. Predictivism has lately enjoyed some strong supporting arguments from Maher (1988, 1990, 1993) and Kahn, Landsberg, and Stockman (1992). Despite the many virtues of the analyses these authors provide, it is my view that they (along with all other authors on this subject) have failed to understand a fundamental truth about predictivism: the existence of a scientist who predicted T, prior to E's establishment as true, has epistemic import for T (once E is established) only in connection with information regarding the social milieu in which the T-predictor is located and information regarding how the T-predictor was located. The aim of this paper is to show that predictivism is ultimately a social phenomenon that requires a social level of analysis, a thesis I deem social predictivism.
D. Miller's demonstrations of the language dependence of truthlikeness raise a profound problem for the claim that scientific progress is objective. In two recent papers (Barnes 1990, 1991) I argue that the objectivity of progress may be grounded on the claim that the aim of science is not merely truth but knowledge; progress thus construed is objective in an epistemic sense. In this paper I construct a new solution to Miller's problem grounded on the notion of "approximate causal explanation" which allows for linguistically invariant progress outside an epistemic context. I suggest that the notion of "approximate causal explanation" provides the resources for a more robust theory of progress than that provided by the notion of "approximate truth".
In the 1970s a problem arose for the viability of Popper's truthlikeness project. The problem, in short, was that all plausible measures of the truthlikeness of scientific theories were language dependent. This dissertation is primarily concerned to provide a substitute notion that can do the work 'verisimilitude' was intended to do without suffering from linguistic relativity. It is argued that the notion of 'knowledge', or 'knowledgelikeness', can suffice in this regard. Chapter One seeks to convince the reader that the notion of 'truthlikeness' is intrinsically non-objective; all attempts to salvage the objectivity of truthlikeness have failed. Chapters Two, Three, and Four argue that the various language dependence arguments can be overcome by adopting a knowledge-based approach. Chapter Five attempts to diagnose the underlying cause behind the failure of the verisimilitude project.
The problem of dirty hands concerns the apparently inevitable need for effective politicians to do what is ethically wrong. This essay discusses a related problem in democratic elections of politicians being unwilling to commit themselves to precise positions on controversial policy issues. Given certain plausible assumptions, I demonstrate using a simple game theoretic model that there is an incentive structure for political candidates that is damaging to the public good. I contrast this problem with the classic prisoner's dilemma and then go on to discuss some possible strategies for overcoming this problem by an improved system of political debates. 
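The incentive structure at issue can be sketched with a toy payoff matrix. The payoff numbers below are hypothetical, chosen only to exhibit a dilemma-like structure of the general kind the paper discusses; they are not the paper's actual model, which is contrasted with the classic prisoner's dilemma:

```python
# Toy payoff matrix (hypothetical numbers, not the paper's model) for two
# candidates choosing whether to COMMIT to a precise policy position or
# stay VAGUE on a controversial issue.

COMMIT, VAGUE = "commit", "vague"

# payoffs[(a_choice, b_choice)] = (payoff to candidate A, payoff to B)
payoffs = {
    (COMMIT, COMMIT): (2, 2),  # voters are informed; both risk attack ads
    (COMMIT, VAGUE):  (0, 3),  # the committed candidate is attacked; the vague one coasts
    (VAGUE,  COMMIT): (3, 0),
    (VAGUE,  VAGUE):  (1, 1),  # neither commits; the public good suffers
}

def best_response(my_options, opponent_choice, me_index):
    """Return the option maximizing my payoff given the opponent's choice."""
    def my_payoff(choice):
        pair = (choice, opponent_choice) if me_index == 0 else (opponent_choice, choice)
        return payoffs[pair][me_index]
    return max(my_options, key=my_payoff)

# With these payoffs, staying vague is a dominant strategy for candidate A:
print(best_response([COMMIT, VAGUE], COMMIT, 0))  # vague
print(best_response([COMMIT, VAGUE], VAGUE, 0))   # vague
```

Under these illustrative payoffs each candidate does better by staying vague whatever the other does, even though mutual commitment would serve voters better, which is the damaging incentive structure in miniature.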