What attitude should we take toward a scientific theory when it competes with other scientific theories? This question elicited different answers from instrumentalists, logical positivists, constructive empiricists, scientific realists, holists, theory-ladenists, antidivisionists, falsificationists, and anarchists in the philosophy of science literature. I will summarize the diverse philosophical responses to the problem of underdetermination, and argue that there are different kinds of underdetermination, and that they should be kept apart from each other because they call for different responses.
I focus on a key argument for global external world scepticism resting on the underdetermination thesis: the argument according to which we cannot know any proposition about our physical environment because sense evidence for it equally justifies some sceptical alternative (e.g. the Cartesian demon conjecture). I contend that the underdetermination argument can go through only if the controversial thesis that conceivability is per se a source of evidence for metaphysical possibility is true. I also suggest a reason to doubt that conceivability is per se a source of evidence for metaphysical possibility, and thus to doubt the underdetermination argument.
One of the objections against the thesis of underdetermination of theories by observations is that it is unintelligible. Any two empirically equivalent theories—so the argument goes—are in principle intertranslatable, hence cannot count as rivals in any non-trivial sense. Against that objection, this paper shows that empirically equivalent theories may contain theoretical sentences that are not intertranslatable. Examples are drawn from a related discussion about incommensurability that shows that theoretical non-intertranslatability is possible.
In this paper, I will show that the Miracle Argument is unsound if one assumes a certain form of transient underdetermination. To this end, I will first discuss and formalize several variants of underdetermination, especially transient underdetermination, by means of measure theory. I will then formalize a popular and persuasive form of the Miracle Argument that is based on "use novelty". I will then prove that the Miracle Argument is unsound by means of a mathematical example. Finally, I will expose two hidden presuppositions of the Miracle Argument that make it so immensely though deceptively persuasive.
Thomas Bonk has dedicated a book to analyzing the thesis of underdetermination of scientific theories, with a chapter exclusively devoted to the relation between this idea and the indeterminacy of meaning. Both theses caused a revolution in the philosophical world in the sixties, generating a cascade of articles and doctoral theses. The agitation seems to have cooled down, but the point is still debated and may be experiencing a resurgence.
It is argued that, contrary to prevailing opinion, Bas van Fraassen nowhere uses the argument from underdetermination in his argument for constructive empiricism. It is explained that van Fraassen’s use of the notion of empirical equivalence in The Scientific Image has been widely misunderstood. A reconstruction of the main arguments for constructive empiricism is offered, showing how the passages that have been taken to be part of an appeal to the argument from underdetermination should actually be interpreted.
Various forms of underdetermination that might threaten the realist stance are examined. That which holds between different 'formulations' of a theory (such as the Hamiltonian and Lagrangian formulations of classical mechanics) is considered in some detail, as is the 'metaphysical' underdetermination invoked to support 'ontic structural realism'. The problematic roles of heuristic fruitfulness and surplus structure in attempts to break these forms of underdetermination are discussed and an approach emphasizing the relevant structural commonalities is defended.
If cosmology is to obtain knowledge about the whole universe, it faces an underdetermination problem: alternative space-time models are compatible with our evidence. The problem can be avoided, though, if there are good reasons to adopt the Cosmological Principle (CP), because, assuming the principle, one can confine oneself to the small class of homogeneous and isotropic space-time models. The aim of this paper is to ask whether there are good reasons to adopt the Cosmological Principle in order to avoid underdetermination in cosmology. Various strategies to justify the CP are examined: for instance, arguments to the effect that the truth of the CP follows generically from a large set of initial conditions, an inference to the best explanation, and an inductive strategy are assessed. I conclude that a convincing justification of the CP has not yet been established, but this claim is contingent on a number of results that may have to be revised in the future.
Structural realism is sometimes said to undermine the theory underdetermination (TUD) argument against realism, since, in usual TUD scenarios, the supposed underdetermination concerns the object-like theoretical content but not the structural content. The paper explores the possibility of structural TUD by considering some special cases from modern physics, but also questions the validity of the TUD argument itself. The upshot is that cases of structural TUD cannot be excluded, but that TUD is perhaps not such a terribly serious anti-realist argument.
Duhem–Quine underdetermination plays a constructive role in epistemology by pinpointing the impact of non-empirical virtues or cognitive values on theory choice. Underdetermination thus contributes to illuminating the nature of scientific rationality. Scientists prefer and accept one account among empirically equivalent alternatives. The non-empirical virtues operating in science are laid open in such theory-choice decisions, which act as an epistemological test tube in making explicit commitments about what scientific knowledge should be like.
This paper pursues Ernan McMullin's claim ("Virtues of a Good Theory" and related papers on theory choice) that talk of theory virtues exposes a fault-line in philosophy of science separating "very different visions" of scientific theorizing. It argues that connections between theory virtues and virtue epistemology are substantive rather than ornamental, since both address underdetermination problems in science, helping us to understand the objectivity of theory choice and more specifically what I term the ampliative adequacy of scientific theories. The paper therefore argues that virtue epistemologies can make substantial contributions to the epistemology and methodology of the sciences, helping to bridge the gulf between realists and anti-realists and to reinforce moderation over claims about the implications of underdetermination problems for scientific inquiry. It finally makes and develops the suggestion that virtue epistemologies, at least of the kind developed here, offer support to the position that philosophers of science know as normative naturalism.
As climate policy decisions are decisions under uncertainty, being based on a range of future climate change scenarios, it becomes a crucial question how to set up this scenario range. Failing to comply with the precautionary principle, the scenario methodology widely used in the Third Assessment Report of the Intergovernmental Panel on Climate Change (IPCC) seems to violate international environmental law, in particular a provision of the United Nations Framework Convention on Climate Change. To place climate policy advice on a sound methodological basis would imply that climate simulations based on complex climate models had, in stark contrast to their current hegemony, hardly any epistemic role to play in climate scenario analysis at all. Their main function might actually consist in 'foreseeing future ozone-holes'. In order to argue for these theses, I first explain the plurality of climate models used in climate science by the failure to avoid the problem of underdetermination. As a consequence, climate simulation results have to be interpreted as modal sentences, stating what is possibly true of our climate system. This indicates that climate policy decisions are decisions under uncertainty. Two general methodological principles which may guide the construction of the scenario range are formulated and contrasted with each other: modal inductivism and modal falsificationism. I argue that modal inductivism, being the methodology implicitly underlying the third IPCC report, is severely flawed. Modal falsificationism, representing the sound alternative, would in turn require an overhaul of IPCC practice.
Thick terms and concepts in ethics (e.g. selfish, cruel and courageous) somehow combine evaluation and non-evaluative description. The non-evaluative aspects of thick terms and concepts underdetermine their extensions. Many writers argue that this underdetermination point is best explained by supposing that thick terms and concepts are semantically evaluative in some way such that evaluation plays a role in determining their extensions. This paper argues that the extensions of thick terms and concepts are underdetermined by their meanings in toto, irrespective of whether their extensions are partly determined by evaluation; the underdetermination point can therefore be explained without supposing that thick terms and concepts are semantically evaluative. My argument applies general points about semantic gradability and context-sensitivity to the semantics of thick terms and concepts.
The thesis that the practice and evaluation of science requires social value-judgment, that good science is not value-free or value-neutral but value-laden, has been gaining acceptance among philosophers of science. The main proponents of the value-ladenness of science rely on either arguments from the underdetermination of theory by evidence or arguments from inductive risk. Both arguments share the premise that we should only consider values once the evidence runs out, or where it leaves uncertainty; they adopt a criterion of lexical priority of evidence over values. The motivation behind lexical priority is to avoid reaching conclusions on the basis of wishful thinking rather than good evidence. The problem of wishful thinking is indeed real: it would be an egregious error to adopt beliefs about the world because they comport with how one would prefer the world to be. I will argue, however, that giving lexical priority to evidential considerations over values is a mistake, and unnecessary for adequately avoiding the problem of wishful thinking. Values have a deeper role to play in science than proponents of the underdetermination and inductive risk arguments have suggested.
In this paper, I argue (i) that there are certain methodological practices that are epistemically significant, and (ii) that we can test for the success of these practices empirically by examining case-studies in the history of science. Analysing a particular episode from the history of medicine, I explain how this can help us resolve specific cases of underdetermination. I conclude that, while the anti-realist is (more or less legitimately) able to construct underdetermination scenarios on a case-by-case basis, he will have to abandon the strategy of using algorithms to do so, thus losing the much-needed guarantee that there will always be rival cases of the required kind.
Anthony Brueckner has argued that claims about underdetermination of evidence are suppressed in closure-based scepticism ("The Structure of the Skeptical Argument", Philosophy and Phenomenological Research 54:4, 1994). He also argues that these claims about underdetermination themselves lead to a paradoxical sceptical argument—the underdetermination argument—which is more fundamental than the closure argument. If Brueckner is right, the status quo focus of some predominant anti-sceptical strategies may be misguided. In this paper I focus specifically on the relationship between these two arguments. I provide support for Brueckner's claim that the underdetermination argument is the more fundamental sceptical argument. I do so by responding to a challenge to this claim put forward by Stewart Cohen ("Two Kinds of Skeptical Argument", Philosophy and Phenomenological Research 58:1, 1998). Cohen invokes an alternative epistemic principle which he thinks can be used to challenge Brueckner. Cohen's principle raises interesting questions about the relationship between evidential considerations and explanatory considerations in the context of scepticism about our knowledge of the external world. I explore these questions in my defence of Brueckner.
There are results which show that measure-theoretic deterministic models and stochastic models are observationally equivalent. Thus there is a choice between a deterministic and an indeterministic model and the question arises: Which model is preferable relative to evidence? If the evidence equally supports both models, there is underdetermination. This paper first distinguishes between different kinds of choice and clarifies the possible resulting types of underdetermination. Then a new answer is presented: the focus is on the choice between a Newtonian deterministic model supported by indirect evidence from other Newtonian models which invoke similar additional assumptions about the physical systems and a stochastic model that is not supported by indirect evidence. It is argued that the deterministic model is preferable. The argument against underdetermination is then generalised to a broader class of cases. Finally, the paper criticises the extant philosophical answers in relation to the preferable model. Winnie's (1998) argument for the deterministic model is shown to deliver the correct conclusion relative to observations which are possible in principle and where there are no limits, in principle, on observational accuracy (the type of choice Winnie was concerned with). However, in practice the argument fails. A further point made is that Hoefer's (2008) argument for the deterministic model is untenable.
This paper examines the epistemological significance of the present situation of underdetermination in quantum mechanics. After analyzing this underdetermination at three levels (formal, ontological, and methodological), the paper considers implications for a number of variants of the thesis of scientific realism in fundamental physics and reassesses Lakatos's characterization of progress in physical theory in light of the present situation. Next, this paper considers the implications of underdetermination for Weinberg's "dream of a final theory." Finally, the paper concludes by suggesting how one might still think of realism and progress in fundamental physics despite the possibility of persistent underdetermination in quantum mechanics.
Are theories 'underdetermined by the evidence' in any way that should worry the scientific realist? I argue that no convincing reason has been given for thinking so. A crucial distinction is drawn between data equivalence and empirical equivalence. Duhem showed that it is always possible to produce a data equivalent rival to any accepted scientific theory. But there is no reason to regard such a rival as equally well empirically supported and hence no threat to realism. Two theories are empirically equivalent if they share all consequences expressed in purely observational vocabulary. This is a much stronger requirement than has hitherto been recognised—two such 'rival' theories must in fact agree on many claims that are clearly theoretical in nature. Given this, it is unclear how much of an impact on realism a demonstration that there is always an empirically equivalent 'rival' to any accepted theory would have—even if such a demonstration could be produced. Certainly in the case of the version of realism that I defend—structural realism—such a demonstration would have precisely no impact: two empirically equivalent theories are, according to structural realism, cognitively indistinguishable.
The paper explicates unique events and investigates their epistemology. Explications of unique events as individuated, different, and emergent are philosophically uninteresting. Unique events are topics of why-questions that radically underdetermine all their potential explanations. Uniqueness that is relative to a level of scientific development is differentiated from absolute uniqueness. Science eliminates relative uniqueness by discovery of recurrence of events and properties, falsification of assumptions of why-questions, and methodological simplification, e.g. by explanatory methodological reduction. Finally, an overview of contemporary philosophical disputes that hinge on issues of uniqueness emphasizes its philosophical significance.
Anthony Brueckner argues for a strong connection between the closure and the underdetermination argument for scepticism. Moreover, he claims that both arguments rest on infallibilism: in order to motivate the premises of the arguments, the sceptic has to refer to an infallibility principle. If this were true, fallibilists would be right in not taking the problems posed by these sceptical arguments seriously. As many epistemologists are sympathetic to fallibilism, this would be a very interesting result. However, in this paper I will argue that Brueckner's claims are wrong: the closure and the underdetermination argument are not as closely related as he assumes and neither rests on infallibilism. Thus even a fallibilist should take these arguments to raise serious problems that must be dealt with somehow.
Quantum field theory (QFT) presents a genuine example of the underdetermination of theory by empirical evidence. There are variants of QFT—for example, the standard textbook formulation and the rigorous axiomatic formulation—that are empirically indistinguishable yet support different interpretations. This case is of particular interest to philosophers of physics because, before the philosophical work of interpreting QFT can proceed, the question of which variant should be subject to interpretation must be settled. New arguments are offered for basing the interpretation of QFT on a rigorous axiomatic variant of the theory. The pivotal considerations are the roles that consistency and idealization play in this case.
I present an argument that encapsulates the view that theory is underdetermined by evidence. I show that if we accept Williamson's equation of evidence and knowledge, then this argument is question-begging. I examine ways in which defenders of underdetermination may avoid this criticism. I also relate this argument and my critique to van Fraassen's constructive empiricism.
The basic aim of Alvin Goldman’s approach to epistemology, and the tradition it represents, is naturalistic; that is, epistemological theories in this tradition aim to identify the naturalistic, nonnormative criteria on which justified belief supervenes (Goldman, 1986; Markie, 1997). The basic method of Goldman’s epistemology, and the tradition it represents, is the reflective equilibrium test; that is, epistemological theories in this tradition are tested against our intuitions about cases of justified and unjustified belief (Goldman, 1986; Markie, 1997). I will argue that the prospect of having to reject their standard methodology is one epistemologists have to take very seriously; and I will do this by arguing that some current rival theories of epistemic justification are in fact in reflective equilibrium with our intuitions about cases of justified and unjustified belief. That is, I will argue that intuition underdetermines theory choice in epistemology, in much the way that observation underdetermines theory choice in the empirical sciences. If reflective equilibrium leads to the underdetermination problem I say it leads to, then it cannot satisfy the aims of contemporary epistemology, and so cannot serve as its standard methodology.
Advocates of the "strong programme" in the sociology of knowledge have argued that, because scientific theories are "underdetermined" by data, sociological factors must be invoked to explain why scientists believe the theories they do. I examine this argument, and the responses to it by J.R. Brown (1989) and L. Laudan (1996). I distinguish between a number of different versions of the underdetermination thesis, some trivial, some substantive. I show that Brown's and Laudan's attempts to refute the sociologists' argument fail. Nonetheless, the sociologists' argument falls to a different criticism, for the version of the underdetermination thesis that the argument requires has not been shown to be true.
The underdetermination of theory by data argument (UD) is traditionally construed as an argument that tells us that we ought to favour an anti-realist position over a realist position. I argue that when UD is construed as an argument saying that theory choice is to proceed between theories that are empirically equivalent and adequate to the phenomena up until now, the argument will not favour constructive empiricism over realism. A constructive empiricist cannot account for why scientists are reasonable in expecting one theory to be empirically adequate rather than another, given the criteria he suggests for theory choice.
It is commonly believed that Quine's principal argument for the Indeterminacy of Translation requires an untenably strong account of the underdetermination of theories by evidence, namely that two theories may be compatible with all possible evidence for them and yet incompatible with each other. In this article, I argue that Quine's conclusion that translation is indeterminate can be based upon the weaker, uncontroversial conception of theoretical underdetermination, in conjunction with a weak reading of the 'Gavagai' argument which establishes the underdetermination of the sense and reference of subsentential terms. If underdetermination is considered to be a widespread phenomenon in science, or in inductive reasoning more generally, then the Indeterminacy of Translation will be widespread too. Finally, I briefly consider two issues concerning the scope of this conclusion about the Indeterminacy of Translation: first, whether the argument presupposes behaviourism; and second, whether indeterminacy is restricted to the case of radical translation. I argue that the answer to both these questions is negative, and thus that the thesis of semantic indeterminacy remains relevant to those who disagree with Quine about some issues concerning the nature of mind and language.
Two of W. V. Quine's most familiar doctrines are his endorsement of the distinction between underdetermination and indeterminacy, and his rejection of the distinction between analytic and synthetic truths. The author argues that these two doctrines are incompatible. In terms wholly acceptable to Quine, and based on the underdetermination/indeterminacy distinction, the author draws an exhaustive and exclusive distinction between two kinds of true sentences, and then argues that this corresponds to the traditional analytic/synthetic distinction. In an appendix the author expands on one aspect of the underdetermination/indeterminacy distinction, as construed here, and discusses, in passing, some of Quine's more general views on truth.
According to the thesis of semantic underdetermination, most sentences of a natural language lack a definite semantic interpretation. This thesis supports an argument against the use of natural language as an instrument of thought, based on the premise that cognition requires a semantically precise and compositional instrument. In this paper we examine several ways to construe this argument, as well as possible ways out for the cognitive view of natural language in the introspectivist version defended by Carruthers. Finally, we sketch a view of the role of language in thought as a specialized tool, showing how it avoids the consequences of semantic underdetermination.
The first part of this paper discusses Quine's views on underdetermination of theory by evidence, and the indeterminacy of translation, or meaning, in relation to certain physical theories. The underdetermination thesis says different theories can be supported by the same evidence, and the indeterminacy thesis says the same component of a theory that is underdetermined by evidence is also meaning indeterminate. A few examples of underdetermination and meaning indeterminacy are given in the text. In the second part of the paper, Quine's scientific realism is discussed briefly, along with some of the difficulties encountered when considering the 'truth' of different empirically equivalent theories. It is concluded that the difference between underdetermination and indeterminacy, while significant, is not as great as Quine claims. It just means that after we have chosen a framework theory, from a number of empirically equivalent ones, we still have further choices along two different dimensions.
Underdetermination is a relation between evidence and theory. More accurately, it is a relation between the propositions that express the (relevant) evidence and the propositions that constitute the theory. Evidence is said to underdetermine theory. This may mean two things. First, the evidence cannot prove the truth of the theory. Second, the evidence cannot render the theory probable. Let's call the first deductive underdetermination, and the second inductive (or ampliative) underdetermination. Both kinds of claim are supposed to have a certain epistemic implication, viz., that belief in theory is never warranted by the evidence. This is the underdetermination thesis.
Kyle Stanford’s arguments against scientific realism are assessed, with a focus on the underdetermination of theory by evidence. I argue that discussions of underdetermination have neglected a possible symmetry which may ameliorate the situation.
Classical and quantum field theory provide not only realistic examples of extant notions of empirical equivalence, but also new notions of empirical equivalence, both modal and occurrent. A simple but modern gravitational case goes back to the 1890s, but there has been apparently total neglect of the simplest relativistic analog, with the result that an erroneous claim has taken root that Special Relativity could not have accommodated gravity even if there were no bending of light. The fairly recent acceptance of nonzero neutrino masses shows that widely neglected possibilities for nonzero particle masses have sometimes been vindicated. In the electromagnetic case, there is permanent underdetermination at the classical and quantum levels between Maxwell's theory and the one-parameter family of Proca's electromagnetisms with massive photons, which approximate Maxwell's theory in the limit of zero photon mass. While Yang–Mills theories display similar approximate equivalence classically, quantization typically breaks this equivalence. A possible exception, including unified electroweak theory, might permit a mass term for the photons but not the Yang–Mills vector bosons. Underdetermination between massive and massless (Einstein) gravity even at the classical level is subject to contemporary controversy.
The old antagonism between the Quinean and the Duhemian view on underdetermination is reexamined. In this respect, two theses will be defended. First, it is argued that the main differences between Quine's and Duhem's versions of underdetermination derive from a different attitude towards the history of science. While Quine considered underdetermination from an ahistorical, logical point of view, Duhem approached it as a distinguished historian of physics. On this basis, a logical and a historical version of the underdetermination thesis can be distinguished. The second thesis of the article is that the main objections against underdetermination are fatal only to the logical rendering. Taken together, the two theses constitute a defence of underdetermination.
Current discussion of scientific realism and antirealism often cites Pierre Duhem's argument for the underdetermination of theory choice by evidence. Participants draw on an account of his underdetermination thesis that is familiar, but incomplete. The purpose of this article is to complete the familiar account. I argue that a closer look at Duhem's The Aim and Structure of Physical Theory (1914) suggests that the rationale for his underdetermination thesis comes from his philosophy of scientific language. I explore how an understanding of physical laws as symbolic is meant to support the thesis. In the course of my argument, I point out that Duhemian underdetermination is not meta-practical but grounded in the practice of science, specifically in the scientist's use of instruments and measurement techniques. Measurement has a significant limitation, according to Duhem: it always involves approximation and a degree of experimental error. Consequently, it cannot overcome the gap between the ordinary, concrete language of observation and the (abstract and symbolic) mathematical language of science. Moreover, Duhem argues that the use of instruments in experiment invokes whole groups of theories. I contend that, ultimately, this reliance on auxiliary assumptions, which makes possible the use of instruments, is the foundation of his thesis and that recognizing this completes the familiar account of his underdetermination argument.
The antirealist argument from the underdetermination of theories by data relies on the premise that the empirical content of a theory is the only determinant of its belief-worthiness (premise NN). Several authors have claimed that the antirealist cannot endorse NN, on pain of internal inconsistency. I concede this point. Nevertheless, this refutation of the underdetermination argument fails because there are weaker substitutes for NN that will serve just as well as a premise to the argument. On the other hand, antirealists have not made a convincing case for NN (or its weaker substitutes) either. In particular, I criticize van Fraassen's recent claim that all ampliative rules in epistemology must be rejected on the grounds that they lead to incoherence. The status of the underdetermination argument remains unsettled.
According to the no miracles argument, scientific realism provides the only satisfactory explanation of the predictive success of science. It is argued in the present article that a different explanatory strategy, based on the posit of limitations to the underdetermination of scientific theory building by the available empirical data, offers a more convincing understanding of scientific success.
The underdetermination of theory by data obtains when, inescapably, evidence is insufficient to allow scientists to decide responsibly between rival theories. One response to would-be underdetermination is to deny that the rival theories are distinct theories at all, insisting instead that they are just different formulations of the same underlying theory; we call this the identical rivals response. An argument adapted from John Norton suggests that the response is presumptively always appropriate, while another from Larry Laudan and Jarrett Leplin suggests that the response is never appropriate. Arguments from Einstein for the special and general theories of relativity may fruitfully be seen as instances of the identical rivals response; since Einstein's arguments are generally accepted, the response is at least sometimes appropriate. But when is it appropriate? We attempt to steer a middle course between Norton's view and that of Laudan and Leplin: the identical rivals response is appropriate when there is good reason for adopting a parsimonious ontology. Although in simple cases the identical rivals response need not involve any ontological difference between the theories, in actual scientific cases it typically requires treating apparent posits of the various theories as mere verbal ornaments or computational conveniences. Since these would-be posits are not now detectable, there is no perfectly reliable way to decide whether we should eliminate them or not. As such, there is no rule for deciding whether the identical rivals response is appropriate or not. Nevertheless, there are considerations that tell for and against the response; we conclude by suggesting two of them.
This paper criticizes the attempt to found the epistemological doctrine that all theories are evidentially underdetermined on the thesis that all theories have empirically equivalent rivals. The criticisms focus on the role of auxiliary hypotheses in prediction. It is argued, in particular, that if auxiliaries are underdetermined, then the thesis of empirical equivalence is undecidable. The inference from empirical equivalence to the underdetermination of total theories would seem to survive the criticisms, because total theories do not require auxiliaries to yield observational consequences. It is shown that, nevertheless, underdetermination cannot be established for total theories.
At the heart of the underdetermination of scientific theory by evidence is the simple idea that the evidence available to us at a given time may fail to determine what beliefs we should hold in response to it. In a textbook example, if all I know is that you spent $10 on apples and oranges and that apples cost $1 while oranges cost $2, then I know that you did not buy six oranges, but I do not know whether you bought one orange and eight apples, two oranges and six apples, and so on. A simple scientific example can be found in the rationale behind the sensible methodological adage that "correlation does not imply causation". If watching lots of cartoons causes children to be more violent in their playground behavior, then we should (barring complications) expect to find a correlation between levels of cartoon viewing and violent playground behavior. But that is also what we would expect to find if children who are prone to violence tend to enjoy and seek out cartoons more than other children, or if propensities to violence and increased cartoon viewing are both caused by some third factor (like general parental neglect or excessive consumption of jellybeans). So a high correlation between cartoon viewing and violent playground behavior is evidence that (by itself) simply underdetermines what we should believe about the causal relationship between these two activities. As we will see, however, the challenge of distinguishing correlation from causation is far from the only important circumstance in which underdetermination is thought to arise in scientific inquiry.
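A minimal, purely illustrative sketch of the apples-and-oranges example above (not part of the abstract): enumerating in Python the purchases consistent with the stated evidence makes the underdetermination visible, since several rival hypotheses survive.

    # Illustrative sketch: purchases consistent with spending $10,
    # given apples at $1 each and oranges at $2 each.
    consistent = []
    for oranges in range(0, 6):          # six oranges would already cost $12
        apples = 10 - 2 * oranges        # the remaining dollars buy apples at $1 each
        consistent.append((apples, oranges))
    print(consistent)
    # [(10, 0), (8, 1), (6, 2), (4, 3), (2, 4), (0, 5)]
    # The evidence rules out six oranges but leaves six rival hypotheses standing:
    # by itself it underdetermines what was actually bought.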
Four empirically equivalent versions of general relativity, namely standard GR, Lorentz-invariant gravitational theory, and the gravitational gauge theories of the Lorentz and translation groups, are investigated in the form of a case study for theory underdetermination. The various ontological indeterminacies (both underdetermination and inscrutability of reference) inherent in gravitational theories are analyzed in a detailed comparative study. The concept of practical underdetermination is proposed, followed by a discussion of its adequacy to describe scientific progress.
When trying to assess the implications of recent deep shifts in the philosophy of science for the broader arena of medicine, the theme that most readily comes to mind is underdetermination. In scientific research one always hopes for determination: that the world should determine the observations we make of it; that evidence should determine the theories we adopt; that the practice of science should determine results independent of the sort of society in which that practice takes place. In this essay, doubts cast on each of these ideas by recent work in philosophy of science will be discussed and the consequences for philosophy of medicine will be indicated. Keywords: Underdetermination, retroduction, Kuhn, observation as theory-laden, realism, theory appraisal, values in science, social dimensions of science.
Underdetermination can take many forms apart from the familiar case of the underdetermination of nature's laws by the observed phenomena. Of particular interest here is the potential underdetermination of nature's phenomena by nature's laws. The paper considers various ways in which this prospect might come to be realized, and goes on to consider some of the wider implications of this circumstance.
Advocates have sought to prove that underdetermination obtains because all theories have empirical equivalents. But algorithms for generating empirical equivalents simply exchange underdetermination for familiar philosophical chestnuts, while the few convincing examples of empirical equivalents will not support the desired sweeping conclusions. Nonetheless, underdetermination does not depend on empirical equivalents: our warrant for current theories is equally undermined by presently unconceived alternatives that are as well confirmed by the existing evidence alone, so long as this transient predicament recurs for each theory and body of evidence we consider. The historical record supports the claim that this recurrent, transient underdetermination predicament is our own.
The goal of this article is to show that the structuralist approach provides a powerful framework for the analysis of certain holistic phenomena in empirical theories. We focus on two aspects of holism. The first refers to the involvement of comprehensive complexes of hypotheses in the theoretical treatment of systems regarded in isolation. By contrast, the second refers to the correlation between the theoretical descriptions of different systems. It is demonstrated how these two aspects can be analysed by making use of the structuralist notion of theory-nets, and how they are reflected by a refined version of the Ramsey sentence. Furthermore, it is argued that there exists a tight correlation between the occurrence of these two holistic phenomena, a specific form of underdetermination of terms which occur in the fundamental principles of an empirical theory, and the shaping of the theory's protective belt. After having dealt with these questions in abstracto, the relevance of these considerations for a better understanding of the dynamics of empirical theories is demonstrated in a concrete case study. It refers to the role holistic phenomena played in the investigation of the anomalous advance of Mercury's perihelion and in the various attempts to eliminate this anomaly.
The underdetermination of theory by evidence must be distinguished from holism. The latter is a doctrine about the testing of scientific hypotheses; the former is a thesis about empirically adequate, logically incompatible global theories or "systems of the world". The distinction is crucial for an adequate assessment of the underdetermination thesis. The paper shows how some treatments of underdetermination are vitiated by failure to observe this distinction, and identifies some necessary conditions for the existence of multiple empirically equivalent global theories. We consider how empiricists should respond to the possibility of such systems of the world.
In the shadowy world between philosophy of science and ethics lie the paired concepts of underdetermination and incommensurability. Typically, scientific evidence underdetermines the hypotheses tested in research studies, providing neither proof nor disproof. As a result, scientists must judge the weight of the evidence, and in doing so, bring scientific and extrascientific values to bear in their approaches to assessing and interpreting the evidence. When different scientists employ very different values, their views are said to be incommensurable. Less prominent differences represent partial incommensurabilities. The definitions and analyses provided by McMullin and by Veatch and Stempsey lay the foundation for the description of partial incommensurabilities in the current practice of assessing and interpreting epidemiologic evidence. This practice is called "causal inference" and is undertaken for the purpose of making causal conclusions and public health recommendations from population-based studies of exposures and diseases. Following the work of Bayley and Longino, several suggestions are examined for dealing with the partial incommensurabilities found in the general practice of causal inference in contemporary epidemiology. Two specific examples illustrate these ideas: studies on the relationship between induced abortion and breast cancer and those on the relationship between moderate alcohol consumption and breast cancer.
In this paper I criticize one of the most convincing recent attempts to resist the underdetermination thesis, Laudan's argument from indirect confirmation. Laudan highlights and rejects a tacit assumption of the underdetermination theorist, namely that theories can be confirmed only by empirical evidence that follows from them. He shows that once we accept that theories can also be confirmed indirectly, by evidence not entailed by them, the skeptical conclusion does not follow. I agree that Laudan is right to reject this assumption, but I argue that his explanation of how the rejection of this assumption blocks the skeptical conclusion is flawed. I conclude that the argument from indirect confirmation is not effective against the underdetermination thesis.
There are two ways that we might respond to the underdetermination of theory by data. One response, which we can call the agnostic response, is to suspend judgment: "Where scientific standards cannot guide us, we should believe nothing". Another response, which we can call the fideist response, is to believe whatever we would like to believe: "If science cannot speak to the question, then we may believe anything without science ever contradicting us". C.S. Peirce recognized these options and suggested evading the dilemma. It is a Logical Maxim, he suggests, that there could be no genuine underdetermination. This is no longer a viable option in the wake of developments in modern physics, so we must face the dilemma head on. The agnostic and fideist responses to underdetermination represent fundamentally different epistemic viewpoints. Nevertheless, the choice between them is not an unresolvable struggle between incommensurable worldviews. There are legitimate considerations tugging in each direction. Given the balance of these considerations, there should be a modest presumption of agnosticism. This may conflict with Peirce's Logical Maxim, but it preserves all that we can preserve of the Peircean motivation.
I examine the argument that scientific theories are typically 'underdetermined' by the data, an argument which has often been used to combat scientific realism. I deal with two objections to the underdetermination argument: (i) that the argument conflicts with the holistic nature of confirmation, and (ii) that the argument rests on an untenable theory/data dualism. I discuss possible responses to both objections, and argue that in both cases the proponent of underdetermination can respond in ways which are individually plausible, but that the best response to the first objection conflicts with the best response to the second. Consequently underdetermination poses less of a problem for scientific realism than has often been thought.