The no-miracles argument for realism and the pessimistic meta-induction for anti-realism pull in opposite directions. Structural Realism—the position that the mathematical structure of mature science reflects reality—relieves this tension.
The main argument for scientific realism is that our present theories in science are so successful empirically that they can't have got that way by chance—instead they must somehow have latched onto the blueprint of the universe. The main argument against scientific realism is that there have been enormously successful theories which were once accepted but are now regarded as false. The central question addressed in this paper is whether there is some reasonable way to have the best of both worlds: to give the argument from scientific revolutions its full weight and yet still adopt some sort of realist attitude towards presently accepted theories in physics and elsewhere. I argue that there is such a way—through structural realism, a position adopted by Poincaré, and here elaborated and defended.
The evidence from randomized controlled trials (RCTs) is widely regarded as supplying the ‘gold standard’ in medicine—we may sometimes have to settle for other forms of evidence, but this is always epistemically second-best. But how well justified is the epistemic claim about the superiority of RCTs? This paper adds to my earlier (predominantly negative) analyses of the claims produced in favour of the idea that randomization plays a uniquely privileged epistemic role, by closely inspecting three related arguments from leading contributors to the burgeoning field of probabilistic causality—Papineau, Cartwright and Pearl. It concludes that none of these further arguments supplies any practical reason for thinking of randomization as having unique epistemic power. Contents: Introduction; Why the issue is of great practical importance—the ECMO case; Papineau on the ‘virtues of randomization’; Cartwright on causality and the ‘ideal’ randomized experiment; Pearl on randomization, nets and causes; Conclusion.
Evidence-Based Medicine is a relatively new movement that seeks to put clinical medicine on a firmer scientific footing. I take it as uncontroversial that medical practice should be based on best evidence—the interesting questions concern the details. This paper tries to move towards a coherent and unified account of best evidence in medicine, by exploring in particular the EBM position on RCTs.
Obviously medicine should be evidence-based. The issues lie in the details: what exactly counts as evidence? Do certain kinds of evidence carry more weight than others? And how exactly should medicine be based on evidence? When it comes to these details, the evidence-based medicine movement has got itself into a mess – or so it will be argued. In order to start to resolve this mess, we need to go 'back to basics'; and that means turning to the philosophy of science. The theory of evidence, or rather the logic of the interrelations between theory and evidence, has always been central to the philosophy of science – sometimes under the alias of the 'theory of confirmation'. When taken together with a little philosophical commonsense, this logic can help us move towards a position on evidence in medicine that is more sophisticated and defensible than anything that EBM has been able so far to supply.
Are theories ‘underdetermined by the evidence’ in any way that should worry the scientific realist? I argue that no convincing reason has been given for thinking so. A crucial distinction is drawn between data equivalence and empirical equivalence. Duhem showed that it is always possible to produce a data equivalent rival to any accepted scientific theory. But there is no reason to regard such a rival as equally well empirically supported and hence no threat to realism. Two theories are empirically equivalent if they share all consequences expressed in purely observational vocabulary. This is a much stronger requirement than has hitherto been recognised—two such ‘rival’ theories must in fact agree on many claims that are clearly theoretical in nature. Given this, it is unclear how much of an impact on realism a demonstration that there is always an empirically equivalent ‘rival’ to any accepted theory would have—even if such a demonstration could be produced. Certainly in the case of the version of realism that I defend—structural realism—such a demonstration would have precisely no impact: two empirically equivalent theories are, according to structural realism, cognitively indistinguishable.
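A compact formal gloss of the distinction, in my notation rather than the paper's: writing $Cn_{obs}(T)$ for the set of consequences of theory $T$ expressible in purely observational vocabulary, empirical equivalence is

$$T_1 \sim_{emp} T_2 \iff Cn_{obs}(T_1) = Cn_{obs}(T_2),$$

whereas data equivalence requires only that $T_1$ and $T_2$ agree on the finite body of data actually gathered so far, the far weaker condition that Duhem's result shows is always satisfiable.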
The debate about the relative epistemic weights carried in favour of a theory by predictions of new phenomena as opposed to accommodations of already known phenomena has a long history. We readdress the issue through a detailed re-examination of a particular historical case that has often been discussed in connection with it—that of Mendeleev and the prediction by his periodic law of the three ‘new’ elements, gallium, scandium and germanium. We find little support for the standard story that these predictive successes were outstandingly important in the success of Mendeleev's scheme. Accommodations played an equal role—notably that of argon, the first of the ‘noble gases’ to be discovered; and the methodological situation in this chemical example turns out to be in interesting ways different from that in other cases—invariably from physics—that have been discussed in this connection. The historical episode when accurately analysed provides support for a different account of the relative weight of prediction and accommodation—one that is further articulated here.
Fresnel's theory of light was (a) impressively predictively successful yet (b) was based on an "entity" (the elastic-solid ether) that we now "know" does not exist. Does this case "confute" scientific realism as Laudan suggested? Previous attempts (by Hardin and Rosenberg and by Kitcher) to defuse the episode's anti-realist impact are examined. The strongest form of realism compatible with this case of theory-rejection is in fact structural realism. This view was developed by Poincaré, who also provided reasons to think that it is the only realist view of theories that really makes sense.
The topic of the paper is the "realism–instrumentalism" debate concerning the status of scientific theories. Popper's contributions to this debate are critically examined. In the first part his arguments against instrumentalism are considered; it is claimed that none strikes home against better versions of the doctrine (specifically those developed by Duhem and Poincaré). In the second part, various arguments against realism propounded by Duhem and/or Poincaré (and much discussed by more recent philosophers) are evaluated. These are the arguments from the use of idealisations in science, from the "underdetermination" of scientific theories, and (especially) from the existence of radical scientific revolutions. A maximally strong version of realism—after due allowance has been made for these arguments—is stated and defended. This position is close to Popper's own "conjectural realism" but involves dropping entirely the idea that science has developed "via" theories possessing increasing verisimilitude.
The paper presents a further articulation and defence of the view on prediction and accommodation that I have proposed earlier. It operates by analysing two accounts of the issue—by Patrick Maher and by Marc Lange—that, at least at first sight, appear to be rivals to my own. Maher claims that the time-order of theory and evidence may be important in terms of degree of confirmation, while that claim is explicitly denied in my account. I argue, however, that when his account is analysed, Maher reveals no scientifically significant way in which the time-order counts, and that indeed his view is in the end best regarded as a less than optimally formulated version of my own. Lange has also responded to Maher by arguing that the apparent relevance of temporal considerations is merely apparent: what is really involved, according to Lange, is whether or not a hypothesis constitutes an "arbitrary conjunction." I argue that Lange's suggestion fails: the correct analysis of his and Maher's examples is that provided by my account.
Ethics and epistemology in medicine are more closely and more interestingly intertwined than is usually recognized. To explore this relationship, I present a case study, clinical trials of extracorporeal membrane oxygenation (ECMO; an intervention for persistent pulmonary hypertension of the newborn). Three separate ethical issues that arise from this case study—whether or not it is ethical to perform a certain trial at all, whether stopping rules for trials are ethically mandated, and the issue of informed consent—are all shown to be intimately related to epistemological judgments about the weight of evidence. Although ethical issues cannot, of course, be resolved by consideration of epistemological findings, I argue that no informed view of the ethical issues that are raised can be adopted without first taking an informed view of the evidential-epistemological ones.
[Peter Lipton] From a reliabilist point of view, our inferential practices make us into instruments for determining the truth value of hypotheses where, like all instruments, reliability is a central virtue. I apply this perspective to second-order inductions, the inductive assessments of inductive practices. Such assessments are extremely common, for example whenever we test the reliability of our instruments or our informants. Nevertheless, the inductive assessment of induction has had a bad name ever since David Hume maintained that any attempt to justify induction by means of an inductive argument must beg the question. I will consider how the inductive justification of induction fares from the reliabilist point of view. I will also consider two other well-known arguments that can be construed as inductive assessments of induction. One is the miracle argument, according to which the truth of scientific theories should be inferred as the best explanation of their predictive success; the other is the disaster argument, according to which we should infer that all present and future theories are false on the grounds that all past theories have been found to be false.

[John Worrall] Science seems in some ways to have been remarkably successful. What does this success tell us about the epistemological status of current scientific claims? Peter Lipton considers various meta-inductive arguments, each of which starts from premises about science's 'track record'. I show that his endorsements of the 'strongest' of these are, on analysis, remarkably weak. I argue that this is a reflection of difficulties within the general epistemological framework that he adopts—that of reliabilism. Finally, I briefly outline the quite different approach that I take to this issue, in the process responding to Lipton's criticisms of the 'pessimistic meta-induction'.
What is it reasonable to believe about our most successful scientific theories such as the general theory of relativity or quantum mechanics? That they are true, or at any rate approximately true? Or only that they successfully ‘save the phenomena’, by being ‘empirically adequate’? In earlier work I explored the attractions of a view called Structural Scientific Realism (SSR). This holds that it is reasonable to believe that our successful theories are structurally correct. In the first part of this paper I shall explain in some detail what this thesis means and outline the reasons why it seems attractive. The second section outlines a number of criticisms that have none the less been brought against SSR in the recent literature; and the third and final section argues that, despite the fact that these criticisms might seem initially deeply troubling, the position remains viable.
In a randomized clinical trial (RCT), a group of patients, initially assembled through a mixture of deliberation (involving explicit inclusion and exclusion criteria) and serendipity (which patients happen to walk into which doctor’s clinic while the trial is in progress), are divided by some random process into an experimental group (members of which will receive the therapy under test) and a control group (members of which will receive some other treatment – perhaps placebo, perhaps the currently standard treatment for the condition at issue). In a ‘double blind’ trial neither the patient nor the clinician knows to which of the groups a particular patient belongs. The results of double blind randomized controlled trials are almost universally regarded as providing the ‘gold standard’ for evidence in medicine. Fairly extreme claims to this effect can be found in the literature. For example the statistician Tukey wrote (1977, p. 679) “almost the only source of reliable evidence [in medicine] … is that obtained from … carefully conducted randomised trials”. And the clinician Victor Herbert claimed (1977, p. 690) “… the only source of reliable evidence rising to the level of proof about the usefulness of any new therapy is that obtained from well-planned and carefully conducted randomized, and, where possible, coded (double blind) clinical trials. [Other] studies may point in a direction, but cannot be evidence as lawyers use the term evidence to mean something probative … [that is] tending to prove or actually proving”. Finally, the still very influential movement in favour of ‘Evidence Based Medicine’ (EBM) that began at McMaster University in the 1980s was initially often regarded as endorsing the claim that only RCTs provide real scientifically telling evidence.
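As a purely illustrative aside, not part of the paper: the allocation step just described can be sketched as a tiny simulation. Everything in the sketch is hypothetical (the function name, the 50/50 split, the patient labels), and real trials use more sophisticated schemes such as blocked or stratified randomization.

```python
import random

def randomize(patients, seed=None):
    """Randomly split an assembled patient group into an experimental arm
    (receives the therapy under test) and a control arm (receives placebo
    or the currently standard treatment)."""
    rng = random.Random(seed)  # seeding only makes the sketch reproducible
    shuffled = list(patients)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"experimental": shuffled[:half], "control": shuffled[half:]}

# Example: six patients who happened to walk into the clinic during the trial.
groups = randomize(["p1", "p2", "p3", "p4", "p5", "p6"], seed=0)
print(groups["experimental"], groups["control"])
```

In a ‘double blind’ design the resulting assignment table would additionally be withheld from both patients and clinicians until the trial is complete.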
Having been neglected or maligned for most of this century, Newton's method of 'deduction from the phenomena' has recently attracted renewed attention and support. John Norton, for example, has argued that this method has been applied with notable success in a variety of cases in the history of physics and that this explains why the massive underdetermination of theory by evidence, seemingly entailed by hypothetico-deductive methods, is invisible to working physicists. This paper, through a detailed analysis of Newton's deduction of one particular 'proposition' in optics 'from the phenomena', gives a clearer account than hitherto of the method—highlighting the fact that it is really one of deduction from the phenomena plus 'background knowledge'. It argues that, although the method has certain heuristic virtues, examination of its putative accreditational strengths reveals a range of important problems that its defenders have yet adequately to address.
Worrall argued that structural realism provides a ‘synthesis’ of the main pro-realist argument – the ‘No Miracles Argument’ – and the main anti-realist argument – the ‘Pessimistic Induction’. More recently, however, it has been claimed that each of these arguments is an instance of the same probabilistic fallacy – sometimes called the ‘base-rate fallacy’. If correct, this clearly seems to undermine structural realism, and Magnus and Callender have indeed claimed that both arguments are fallacious and ‘without [them] we lose the rationale for … structural realism’. I here argue that what have been shown to be fallacious are simply misguided formalisations of ‘the’ arguments and that when they are properly construed they continue to provide powerful motivation for favouring structural realism.
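For orientation, a schematic statement of the alleged fallacy, in my notation rather than Magnus and Callender's: write $T$ for 'the theory is (approximately) true' and $S$ for 'the theory is empirically successful'. Bayes' theorem gives

$$P(T \mid S) = \frac{P(S \mid T)\,P(T)}{P(S \mid T)\,P(T) + P(S \mid \neg T)\,P(\neg T)},$$

so a high likelihood $P(S \mid T)$ yields a high $P(T \mid S)$ only given assumptions about the 'base rate' $P(T)$ of true theories; neglecting that dependence is the base-rate fallacy with which both the No Miracles Argument and the Pessimistic Induction have been charged.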
This essay criticizes John Norton's 2010 defense of the thesis that “all induction is local.” Norton's local inductions are bound, if cogent, to involve general principles, and the need to accredit these general principles threatens to lead to all the usual problems associated with the ‘problem of induction’. Norton, in fact, recognizes this threat, but his responses are inadequate. The right response involves not induction but a sophisticated version of hypothetico-deduction. Norton's secondary thesis—that if there is a general account of cogent scientific reasoning, then it is certainly not the one supported by personalist Bayesians—is also criticized.
Nicholas Jardine offers here an edition and the first translation into English of Johannes Kepler's A Defence of Tycho against Ursus. He accompanies this with essays on the provenance of the treatise—the circumstances which provoked Kepler to write it, an analysis of its strategy, style and historical sources, and of the contents of Ursus' Treatise on Astronomical Hypotheses to which Kepler was replying. Dr Jardine also provides three extended interpretive essays on the intrinsic interest and historical significance of the work.
This paper attempts to clarify the debate between those philosophers who hold that the development of science is governed by objective standards of rationality and those sociologists of science who deny this. In particular it focuses on the debate over the ‘symmetry thesis’. Bloor and Barnes argue that a properly scientific approach to science itself demands that an investigator should seek the same general type of explanation for all decisions and actions by past scientists, quite independently of whether or not she or he happens to agree with those decisions or approve those actions as ‘correct’ or ‘rational’. I try to improve on previous treatments of the ‘rationalist’ position (by Lakatos, Laudan, Newton-Smith and Brown) and clarify the exact asymmetries to which the ‘rationalist’ is, and is not, committed.
Proofs and Refutations is essential reading for all those interested in the methodology, the philosophy and the history of mathematics. Much of the book takes the form of a discussion between a teacher and his students. They propose various solutions to some mathematical problems and investigate the strengths and weaknesses of these solutions. Their discussion raises some philosophical problems and some problems about the nature of mathematical discovery or creativity. Imre Lakatos is concerned throughout to combat the classical picture of mathematical development as a steady accumulation of established truths. He shows that mathematics grows instead through a richer, more dramatic process of the successive improvement of creative hypotheses by attempts to 'prove' them and by criticism of these attempts: the logic of proofs and refutations.
In this work the problem of scientific ontology is examined in relation to the general issue of scientific realism. In addition, particular ontological issues raised by particular theories or fields are explored.
Science, and in particular the process of theory-change in science, formed the major inspiration for Karl Popper's whole philosophy. Popper learned about the success of Einstein's revolutionary new theory in 1919, and Einstein ‘became a dominant influence on my thinking—in the long run perhaps the most important influence of all.’ Popper explained why: In May, 1919, Einstein's eclipse predictions were successfully tested by two British expeditions. With these tests a new theory of gravitation and a new cosmology suddenly appeared, not just as a mere possibility, but as an improvement on Newton—a better approximation to the truth … The general assumption of the truth of Newton's theory was of course the result of its incredible success, culminating in the discovery of the planet Neptune … Yet in spite of all this, Einstein had managed to produce a real alternative and, it appeared, a better theory … Like Newton himself, he predicted new effects within our solar system. And some of these predictions, when tested, had now proved successful.