Many philosophers have argued that a hypothesis is better confirmed by some data if the hypothesis was not specifically designed to fit the data. ‘Prediction’, they argue, is superior to ‘accommodation’. Others deny that there is any epistemic advantage to prediction, and conclude that prediction and accommodation are epistemically on a par. This paper argues that there is a respect in which accommodation is superior to prediction. Specifically, the information that the data was accommodated rather than predicted suggests that the data is less likely to have been manipulated or fabricated, which in turn increases the likelihood that the hypothesis is correct in light of the data. In some cases, this epistemic advantage of accommodation may even outweigh whatever epistemic advantage there might be to prediction, making accommodation epistemically superior to prediction all things considered.
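As a rough Bayesian gloss on the reasoning just sketched (my own schematic, not the paper's formal apparatus), write H for the hypothesis, D for the data, M for the proposition that the data were manipulated or fabricated, and A for the information that the data were accommodated rather than predicted. By the law of total probability,

\[ P(H \mid D, A) = P(H \mid D, M, A)\,P(M \mid D, A) + P(H \mid D, \neg M, A)\,P(\neg M \mid D, A), \]

so if learning A lowers the probability of manipulation, P(M | D, A), and the data confirm H more strongly when unmanipulated, the overall posterior in H can rise; this is one way to render the advantage for accommodation described above.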
Methods of machine learning (ML) are gradually complementing and sometimes even replacing methods of classical statistics in science. This raises the question of whether ML faces the same methodological problems as classical statistics. This paper sheds light on this question by investigating a long-standing challenge to classical statistics: the reference class problem (RCP). It arises whenever statistical evidence is applied to an individual object, since the individual belongs to several reference classes and evidence might vary across them. Thus, the problem consists in choosing a suitable reference class for the individual. I argue that deep neural networks (DNNs) are able to overcome specific instantiations of the RCP. Whereas the criteria of narrowness, reliability, and homogeneity that have been proposed to determine a suitable reference class pose an inextricable tradeoff for classical statistics, DNNs are able to satisfy them in some situations. On the one hand, they can exploit the high dimensionality in big-data settings. I argue that this corresponds to the criteria of narrowness and reliability. On the other hand, ML research indicates that DNNs are generally not susceptible to overfitting. I argue that this property is related to a particular form of homogeneity. Taking both aspects together reveals that there are specific settings in which DNNs can overcome the RCP.
This note contains a corrective and a generalization of results by Borsboom et al. (2008), based on Heesen and Romeijn (2019). It highlights the relevance of insights from psychometrics beyond the context of psychological testing.
A self-fulfilling prophecy is, roughly, a prediction that brings about its own truth. Although true predictions are hard to fault, self-fulfilling prophecies are often regarded with suspicion. In this article, we vindicate this suspicion by explaining what self-fulfilling prophecies are and what is problematic about them, paying special attention to how their problems are exacerbated through automated prediction. Our descriptive account of self-fulfilling prophecies articulates the four elements that define them. Based on this account, we begin our critique by showing that typical self-fulfilling prophecies arise due to mistakes about the relationship between a prediction and its object. Such mistakes—along with other mistakes in predicting or in the larger practical endeavor—are easily overlooked when the predictions turn out true. Thus we note that self-fulfilling prophecies prompt no error signals; truth shrouds their mistakes from humans and machines alike. Consequently, self-fulfilling prophecies create several obstacles to accountability for the outcomes they produce. We conclude our critique by showing how failures of accountability, and the associated failures to make corrections, explain the connection between self-fulfilling prophecies and feedback loops. By analyzing the complex relationships between accuracy and other evaluatively significant features of predictions, this article sheds light both on the special case of self-fulfilling prophecies and on the ethics of prediction more generally.
We characterize a type of functional explanation that addresses why a homologous trait originating deep in the evolutionary history of a group remains widespread and largely unchanged across the group’s lineages. We argue that biologists regularly provide this type of explanation when they attribute conserved functions to phenotypic and genetic traits. The concept of conserved function applies broadly to many biological domains, and we illustrate its importance using examples of molecular sequence alignments at the intersection of evolution and cell biology. We use these examples to show how the study of conserved functions can integrate knowledge of a trait’s causal effects on fitness and its history of natural selection without invoking adaptation. We also show how conserved function provides a novel basis for addressing objections against evolutionary functions raised by Robert Cummins.
Based on a review of several “anomalies” in research using implicit measures, Machery (2021) dismisses the modal interpretation of participant responses on implicit measures and, by extension, the value of implicit measures. We argue that the reviewed findings are anomalies only for specific—influential but long-contested—accounts that treat responses on implicit measures as uncontaminated indicators of trait-like unconscious representations that coexist with functionally independent conscious representations. However, the reviewed findings are to-be-expected “normalities” when viewed from the perspective of long-standing alternative frameworks that treat responses on implicit measures as the product of dynamic processes that operate on momentarily activated, consciously accessible information. Thus, although we agree with Machery that the modal view is empirically unsupported, we argue that implicit measures can make a valuable contribution to understanding the complexities of human behavior if they are used wisely in a way that acknowledges what they can and cannot do.
It is relatively easy to assert that information retrieval (IR) is a scientific discipline, but it is rather difficult to explain why it is a science, because what counts as science is still under debate in the philosophy of science. To convince others that IR is science, our ability to explain why is crucial. To explain why IR is a scientific discipline, we use a recently proposed theory and model of scientific study. The explanation involves mapping the knowledge structure of IR to that of well-known scientific disciplines like physics. In addition, the explanation involves identifying the common aim, principles and assumptions in IR and in well-known scientific disciplines like physics, so that they constrain the scientific investigation in IR in a similar way as in physics. There are therefore strong similarities between IR and scientific disciplines like physics in terms of knowledge structure and the constraints on scientific investigation. Based on such similarities, IR is considered a scientific discipline.
Neuroscientists have in recent years turned to building models that aim to generate predictions rather than explanations. This “predictive turn” has swept across domains including law, marketing, and neuropsychiatry. Yet the norms of prediction remain undertheorized relative to those of explanation. I examine two styles of predictive modeling and show how they exemplify the normative dynamics at work in prediction. I propose an account of how predictive models, conceived of as technological devices for aiding decision-making, can come to be adequate for purposes that are defined by both their guiding research questions and their larger social context of application.
Originating in the field of biology, the concept of the hybrid has proved to be influential and effective in historical studies, too. Until now, however, the idea of hybrid knowledge has not been fully explored in the historiography of pre-modern science. This article examines the history of pre-Copernican astronomy and focuses on three case studies—Alexandria in the second century CE; Baghdad in the ninth century; and Constantinople in the fourteenth century—in which hybridization played a crucial role in the development of astronomical knowledge and in philosophical controversies about the status of astronomy and astrology in scholarly and/or institutional settings. By establishing a comparative framework, this analysis of hybrid knowledge highlights different facets of hybridization and shows how processes of hybridization shaped scientific controversies.
Deep neural networks (DNNs) have become increasingly successful in applications from biology to cosmology to social science. Trained DNNs, moreover, correspond to models that ideally allow the prediction of new phenomena. Building in part on the literature on ‘eXplainable AI’, I here argue that these models are instrumental in a sense that makes them non-explanatory, and that their automated generation is opaque in a unique way. This combination implies the possibility of an unprecedented gap between discovery and explanation: When unsupervised models are successfully used in exploratory contexts, scientists face a whole new challenge in forming the concepts required for understanding underlying mechanisms.
In this paper, I critically evaluate several related, provocative claims made by proponents of data-intensive science and “Big Data” which bear on scientific methodology, especially the claim that scientists will soon no longer have any use for familiar concepts like causation and explanation. After introducing the issue, in Section 2, I elaborate on the alleged changes to scientific method that feature prominently in discussions of Big Data. In Section 3, I argue that these methodological claims are in tension with a prominent account of scientific method, often called “Inference to the Best Explanation” (IBE). Later in Section 3, I consider an argument against IBE that will be congenial to proponents of Big Data, namely, the argument due to Roche and Sober (Analysis, 73:659–668) that “explanatoriness is evidentially irrelevant.” This argument is based on Bayesianism, one of the most prominent general accounts of theory-confirmation. In Section 4, I consider some extant responses to this argument, especially that of Climenhaga (Philosophy of Science, 84:359–368). In Section 5, I argue that Roche and Sober’s argument does not show that explanatory reasoning is dispensable. In Section 6, I argue that there is good reason to think explanatory reasoning will continue to prove indispensable in scientific practice. Drawing on Cicero’s oft-neglected De Divinatione, I formulate what I call the “Ciceronian Causal-nomological Requirement” (CCR), which states, roughly, that causal-nomological knowledge is essential for relying on correlations in predictive inference. I defend a version of the CCR by appealing to the challenge of “spurious correlations,” chance correlations which we should not rely upon for predictive inference. In Section 7, I offer some concluding remarks.
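For reference, the Roche and Sober claim cited above is standardly put as a screening-off thesis; in the schematic form I would give it (a reconstruction, not a quotation): where H is a hypothesis, O an observation, and E the proposition that H would explain O if both were true,

\[ P(H \mid O \wedge E) = P(H \mid O), \]

i.e. once O is conditioned on, the additional information that H would explain O leaves H's probability unchanged. The responses considered in Sections 4 and 5 concern whether this equality, even granting it, shows that explanatory reasoning is dispensable.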
Affective forecasting refers to the ability to predict future emotions, a skill that is essential to making decisions on a daily basis. Studies of the concept have determined that individuals are often inaccurate in making such affective forecasts. However, the mechanisms of these errors are not yet clear. In order to better understand why affective forecasting errors occur, this article traces the theoretical roots of the concept, with a focus on its multidisciplinary history. The roots of affective forecasting lie mainly in economics, with early claims positing that utility played a role in decision-making. Furthermore, the philosopher Jeremy Bentham’s account of utilitarianism played a major role in shaping our understanding of whether utility should be defined as a hedonic quality. The birth of behavioural economics resulted in a paradigm shift, introducing the concept of cognitive biases as influences on the accuracy of predicted utility. Daniel Gilbert and Timothy Wilson, the earliest researchers of affective forecasting errors, have built on the concept of the accuracy of predicted affective utility, conducting experiments that seek to determine why our predictions of future affect are inaccurate and how such errors play a role in our decision-making.
The IHME Covid-19 prediction model has been one of the most influential Covid models in the United States. Early on, it received heavy criticism for understating the extent of the epidemic. I argue that this criticism was based on a misunderstanding of the model. The model was best interpreted not as attempting to forecast the actual course of the epidemic. Rather, it was attempting to make a conditional projection: telling us how the epidemic would unfold, given certain assumptions. This misunderstanding of the IHME’s model prevented the public from seeing how dire the model’s projections actually were.
In this paper, I examine Cicero’s oft-neglected De Divinatione, a dialogue investigating the legitimacy of the practice of divination. First, I offer a novel analysis of the main arguments for divination given by Quintus, highlighting the fact that he employs two logically distinct argument forms. Next, I turn to the first of the main arguments against divination given by Marcus. Here I show, with the help of modern probabilistic tools, that Marcus’ skeptical response is far from the decisive, proto-naturalistic assault on superstition that it is sometimes portrayed to be. Then, I offer an extended analysis of the second of the main arguments against divination given by Marcus. Inspired by Marcus’ second main argument, I formulate, explicate, and defend a substantive principle of scientific methodology that I call the “Ciceronian Causal-Nomological Requirement” (CCR). Roughly, this principle states that causal knowledge is essential for relying on correlations in predictive inference. Although I go on to argue that Marcus’ application of the CCR in his debate with Quintus is dialectically inadequate, I conclude that De Divinatione deserves its place in Cicero’s philosophical corpus, and that ultimately, its significance for the history and philosophy of science ought to be recognized.
The traditional philosophy of science approach to prediction leaves little room for appreciating the value and potential of imprecise predictions. At best, they are considered a stepping stone to more precise predictions, while at worst they are viewed as detracting from the scientific quality of a discipline. The aim of this paper is to show that imprecise predictions are undervalued in philosophy of science. I review the conceptions of imprecise predictions and the main criticisms levelled against them: (i) that they cannot aid in model selection and improvement, and (ii) that they cannot support effective interventions in practical decision making. I will argue against both criticisms, showing that imprecise predictions have a circumscribed but important and legitimate place in the study of complex, heterogeneous systems. The argument is illustrated and supported by an example from conservation biology, where imprecise models were instrumental in saving the kōkako from extinction.
We explore three questions about Earth system modeling that are of both scientific and philosophical interest: What kind of understanding can be gained via complex Earth system models? How can the limits of understanding be bypassed or managed? How should the task of evaluating Earth system models be conceptualized?
Has the rise of data-intensive science, or ‘big data’, revolutionized our ability to predict? Does it imply a new priority for prediction over causal understanding, and a diminished role for theory and human experts? I examine four important cases where prediction is desirable: political elections, the weather, GDP, and the results of interventions suggested by economic experiments. These cases suggest caution. Although big data methods are indeed very useful sometimes, in this paper’s cases they improve predictions only to a limited extent or not at all, and their prospects of doing so in the future are limited too.
I evaluate Schurz's proposed meta-inductive justification of induction, a refinement of Reichenbach's pragmatic justification that rests on results from the machine learning branch of prediction with expert advice. My conclusion is that the argument, suitably explicated, comes remarkably close to its grand aim: an actual justification of induction. This finding, however, is subject to two main qualifications, and still disregards one important challenge. The first qualification concerns the empirical success of induction. Even though, I argue, Schurz's argument does not need to spell out what inductive method actually consists in, it does need to postulate that there is something like the inductive or scientific prediction strategy that has so far been significantly more successful than alternative approaches. The second qualification concerns the difference between having a justification for inductive method and for sticking with induction for now. Schurz's argument can only provide the latter. Finally, the remaining challenge concerns the pool of alternative strategies, and the relevant notion of a meta-inductivist's optimality that features in the analytic step of Schurz's argument. Building on the work done here, I will argue in a follow-up paper that the argument needs a stronger dynamic notion of a meta-inductivist's optimality.
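For orientation (a standard result from the prediction-with-expert-advice literature, offered here only as an illustration of the kind of optimality at stake, not a theorem stated in the abstract): for an exponentially weighted average forecaster choosing among N candidate prediction strategies over T rounds, with convex losses in [0, 1],

\[ \frac{1}{T}\sum_{t=1}^{T} \ell_t(\mathrm{MI}) \;\le\; \min_{1 \le i \le N} \frac{1}{T}\sum_{t=1}^{T} \ell_t(i) + \sqrt{\frac{\ln N}{2T}}, \]

so the meta-level forecaster's average loss approaches that of the best accessible strategy as T grows. Bounds of this kind underwrite the notion of a meta-inductivist's optimality that figures in the analytic step discussed above.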
We identify several ongoing debates related to implicit measures, surveying prominent views and considerations in each debate. First, we summarize the debate regarding whether performance on implicit measures is explained by conscious or unconscious representations. Second, we discuss the cognitive structure of the operative constructs: are they associatively or propositionally structured? Third, we review debates about whether performance on implicit measures reflects traits or states. Fourth, we discuss whether a person’s performance on an implicit measure reflects characteristics of the person who is taking the test or characteristics of the situation in which the person is taking the test. Finally, we survey the debate about the relationship between implicit measures and (other kinds of) behavior.
Prediction is an important aspect of scientific practice, because it helps us to confirm theories and effectively intervene on the systems we are investigating. In ecology, prediction is a controversial topic: even though the number of papers focusing on prediction is constantly increasing, many ecologists believe that the quality of ecological predictions is unacceptably low, in the sense that they are not sufficiently accurate sufficiently often. Moreover, ecologists disagree on how predictions can be improved. On one side are the ‘theory-driven’ ecologists, those who believe that ecology lacks a sufficiently strong theoretical framework. For them, more general theories will yield more accurate predictions. On the other are the ‘applied’ ecologists, whose research is focused on effective interventions on ecological systems. For them, deeper knowledge of the system in question is more important than background theory. The aim of this paper is to provide a philosophical examination of both sides of the debate: as there are strengths and weaknesses in both approaches to prediction, a pluralistic approach is best for the future of predictive ecology.
The cosmological relevance of emptiness—that is, space without bodies—is not yet sufficiently appreciated in natural philosophy. This paper addresses two aspects of cosmic emptiness from the perspective of natural philosophy: the distances to the stars in the closer cosmic environment and the expansion of space as a result of the accelerated expansion of the universe. Both aspects will be discussed from both a historical and a systematic perspective. Emptiness can be interpreted as “coming” in a two-fold sense: whereas in the past, knowledge of emptiness, as it were, came to human beings, in the future, it is coming, insofar as its relevance in the cosmos will increase. The longer and more closely emptiness was studied since the beginning of modernity, the larger became the spaces over which it was found to extend. From a systematic perspective, I will show with regard to the closer cosmic environment that the Earth may be separated from the perhaps habitable planets of other stars by an emptiness that is inimical to life and cannot be traversed by humans. This assumption is a result of the discussion of the constraints and possibilities of interstellar space travel as defined by the known natural laws and technical means. With the accelerated expansion of the universe, the distances to other galaxies are increasing. According to the current standard model of cosmology and assuming that the acceleration will remain constant, in the distant future, this expansion will lead first to a substantial change in the epistemic conditions of cosmological knowledge and finally to the completion of the cosmic emptiness and of its relevance, respectively. Imagining the postulated completely empty last state leads human thought to the very limits of what is conceivable.
How can we predict and explain the phenomena of nature? What are the limits to this knowledge process? The central issues of prediction, explanation, and mathematical modeling, which underlie all scientific activity, were the focus of a conference organized by the Swedish Council for the Planning and Coordination of Research, held at the Abisko Research Station in May of 1989. At this forum, a select group of internationally known scientists in physics, chemistry, biology, economics, sociology and mathematics discussed and debated the ways in which prediction and explanation interact with mathematical modeling in their respective areas of expertise. Beyond Belief is the result of this forum, consisting of 11 chapters written specifically for this volume. The multiple themes of randomness, uncertainty, prediction and explanation are presented using (as vehicles) several topical areas from modern science, such as morphogenetic fields, Boscovich covariance, and atmospheric variability. This multidisciplinary examination of the foundational issues of modern scientific thought and methodology will offer stimulating reading for a very broad scientific audience.
Recently, Luk claimed that scientific knowledge both explains and predicts. Do these two functions of scientific knowledge have equal significance, or is one of the two functions more important than the other? This commentary explains why prediction may be mandatory, whereas explanation may be merely desirable and optional.
The paper compares two methods used in the design of diagnostic strategies. The first is a method that specifies the predictive value of diagnostic tests. The second is based on the use of Bayes’ theorem. The main aim of this article is to identify the epistemological assumptions underlying both of these methods. To this end, example designs of one-stage and multi-stage diagnostic strategies developed using both methods are considered.
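To make the connection between the two methods concrete (my own illustration, not an example taken from the paper): the positive predictive value (PPV) of a test follows from Bayes' theorem once the test's sensitivity and specificity and the disease prevalence are fixed,

\[ \mathrm{PPV} = P(D \mid +) = \frac{\mathrm{sens}\cdot\mathrm{prev}}{\mathrm{sens}\cdot\mathrm{prev} + (1-\mathrm{spec})(1-\mathrm{prev})}, \]

so the epistemologically interesting differences between the two methods lie less in the arithmetic than in the assumptions each makes about where prevalence and test characteristics come from and whether they carry over to the patient at hand.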
I critically examine Stewart’s suggestion that we should weigh the various predictions Mendeleev made differently. I argue that in his effort to justify discounting the weight of some of Mendeleev’s failures, Stewart invokes a principle that will, in turn, reduce the weight of some of the successful predictions Mendeleev made. So Stewart’s strategy will not necessarily lead to a net gain in Mendeleev’s favor.
Several authors have claimed that prediction is essentially impossible in the general theory of relativity, the case being particularly strong, it is said, when one fully considers the epistemic predicament of the observer. Each of these claims rests on the support of an underdetermination argument and a particular interpretation of the concept of prediction. I argue that these underdetermination arguments fail and depend on an implausible explication of prediction in the theory. The technical results adduced in these arguments can be related to certain epistemic issues, but can only be misleadingly or mistakenly characterized as related to prediction.
Hypothesizing after the results are known, or HARKing, occurs when researchers check their research results and then add or remove hypotheses on the basis of those results without acknowledging this process in their research report (Kerr, 1998). In the present article, I discuss three forms of HARKing: (1) using current results to construct post hoc hypotheses that are then reported as if they were a priori hypotheses; (2) retrieving hypotheses from a post hoc literature search and reporting them as a priori hypotheses; and (3) failing to report a priori hypotheses that are unsupported by the current results. These three types of HARKing are often characterized as being bad for science and a potential cause of the current replication crisis. In the present article, I use insights from the philosophy of science to present a more nuanced view. Specifically, I identify the conditions under which each of these three types of HARKing is most and least likely to be bad for science. I conclude with a brief discussion about the ethics of each type of HARKing.
Accurate estimation of risk and benefit is integral to good clinical research planning, ethical review, and study implementation. Some commentators have argued that various actors in clinical research systems are prone to biased or arbitrary risk/benefit estimation. In this commentary, we suggest the evidence supporting such claims is very limited. Most prior work has imputed risk/benefit beliefs based on past behavior or goals, rather than directly measuring them. We describe an approach – forecast analysis – that would enable direct and effective measurement of the quality of risk/benefit estimation. We then consider some objections and limitations to the forecasting approach.
In their critique of Klein (2014a), Trafimow and Earp present two theses. First, they argue that, contra Klein, a well-specified theory is not a necessary condition for successful replication. Second, they contend that even when there is a well-specified theory, replication depends more on auxiliary assumptions than on theory proper. I take issue with both claims, arguing that (a) their first thesis confuses a material conditional (what I said) with a modal claim (T&E’s misreading of what I said), and (b) their second thesis has the unfortunate consequence of refuting their first thesis.
Climate models don’t give us probabilistic forecasts. Interpreting their results instead as serious possibilities seems problematic inasmuch as climate models rely on contrary-to-fact assumptions: why should we consider their implications as possible if their assumptions are known to be false? The paper explores a way to address this possibilistic challenge. It introduces the concepts of a perfect and of an imperfect credible world, and discusses whether climate models can be interpreted as imperfect credible worlds. That would allow one to use models for possibilistic prediction and salvage widespread scientific practice.
With the ascent of modern epidemiology in the twentieth century came a new standard model of prediction in public health and clinical medicine. In this article, we describe the structure of the model. The standard model uses epidemiological measures (most commonly, risk measures) to predict outcomes (prognosis) and effect sizes (treatment) in a patient population that can then be transformed into probabilities for individual patients. In the first step, a risk measure in a study population is generalized or extrapolated to a target population. In the second step, the risk measure is particularized or transformed to yield probabilistic information relevant to a patient from the target population. Hence, we call the approach the Risk Generalization-Particularization (Risk GP) Model. There are serious problems at both stages, especially with the extent to which the required assumptions will hold and the extent to which we have evidence for the assumptions. Given that there are other models of prediction that use different assumptions, we should not inflexibly commit ourselves to one standard model. Instead, model pluralism should be standard in medical prediction.
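A minimal schematic of the two steps as I read them (my notation, not the authors'): let S be the study population, T the target population, and RR a relative risk estimated in S. Generalization assumes the measure transports,

\[ \mathrm{RR}_T \approx \mathrm{RR}_S, \]

and particularization then treats the individual patient as a representative member of T,

\[ P(\text{outcome} \mid \text{patient, treated}) \approx \mathrm{RR}_T \times P(\text{outcome} \mid T, \text{untreated}). \]

Each approximation marks an assumption whose warrant, the authors argue, is often weak, which is part of what motivates pluralism about predictive models.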
In the methodology of scientific research programs (MSRP) there are important features concerning the problem of prediction, especially regarding novel facts. In his approach, Imre Lakatos proposed three different levels of prediction: aim, process, and assessment. Chapter 5 pays attention to the characterization of prediction in the methodology of research programs. Thus, it takes into account several features: (1) its pragmatic characterization, (2) the logical perspective as a proposition, (3) the epistemological component, (4) its role in the appraisal of research programs, and (5) its place as a value for scientific research. The notion of “novel facts” is highly relevant in his conception, where several aspects are involved: the directions of novel facts, the different kinds of novelty, and the transition from six possible options of “novel facts” to four choices. Thereafter, the prediction of novel facts as the criterion of appraisal is considered. On the one hand, this requires analyzing the theoretical, empirical, and heuristic possibilities of appraisal; and, on the other, whether there is an overemphasis on the role of prediction in the methodology of scientific research programs. As a consequence, there is an analysis of Lakatos’ criterion of appraisal in MSRP and economics.
All major research ethics policies assert that the ethical review of clinical trial protocols should include a systematic assessment of risks and benefits. But despite this policy, protocols do not typically contain explicit probability statements about the likely risks or benefits involved in the proposed research. In this essay, I articulate a range of ethical and epistemic advantages that explicit forecasting would offer to the health research enterprise. I then consider how some particular confidence levels may come into conflict with the principles of ethical research.
From the early 1950s on, F.A. Hayek was concerned with the development of a methodology of sciences that study systems of complex phenomena. Hayek argued that the knowledge that can be acquired about such systems is, in virtue of their complexity (and the comparatively narrow boundaries of human cognitive faculties), relatively limited. The paper aims to elucidate the implications of Hayek’s methodology with respect to the specific dimensions along which the scientist’s knowledge of some complex phenomena may be limited. Hayek’s fallibilism was an essential (if not always explicit) aspect of his arguments against the defenders of both socialism (1948, 1948) and countercyclical monetary policy (1978); yet, despite the fact that his conceptions of both complex phenomena and the methodology appropriate to their investigation imply that ignorance might beset the scientist in multiple respects, he never explicated all of these consequences. The specificity of a scientific prediction depends on the extent of the scientist’s knowledge concerning the phenomena under investigation. The paper offers an account of the considerations that determine the extent to which a theory’s implications prohibit the occurrence of particular events in the relevant domain. This theory of “predictive degree” both expresses and – as the phenomena of scientific prediction are themselves complex in Hayek’s sense – exemplifies the intuition that the specificity of a scientific prediction depends on the relevant knowledge available.
There is a middle ground of imperfect knowledge in fields like medicine and the social sciences. It stands between our relatively certain day-to-day knowledge, obtained from ordinary observation of regularities in our world, and our knowledge from well-validated theories in the physical sciences. The latter enable reliable prediction, much of the time, of events never before experienced. The former enable prediction only of what has happened before and, beyond that, of educated guesses which may sometimes prove right and sometimes wrong when we test them. The imperfection of our knowledge between those limits is a consequence of complexity. Reductionist empiricism fails when faced with complexity, but we still have to live in a complex world where reductionist science cannot help us devise reliable theories from which to predict the behaviour of the world around us. We cannot predict reliably; we can only actively monitor and actively manage the world around us if we want to try to control our environment. We have a limited prediction horizon. Science and empiricism work well if we want to send Voyager I and II over three billion miles away, but not for most other things. In medicine, how and when to apply probabilistic, conjectural, incomplete medical theories and explanations requires professional expertise, intuition, and judgement (non-scientific knowledge), which is essential. Medical diagnosis is a skill of predicting, from knowledge of what has happened before, that the past will recur in the current patient, applying expertise and intuition drawn from experience of prior cases and from probabilistic medical research and theory. The physician is left to make an educated guess as to how the future will develop for the particular patient who is the focus of his or her current attention.
The paper presents a further articulation and defence of the view on prediction and accommodation that I have proposed earlier. It operates by analysing two accounts of the issue – by Patrick Maher and by Marc Lange – that, at least at first sight, appear to be rivals to my own. Maher claims that the time-order of theory and evidence may be important in terms of degree of confirmation, while that claim is explicitly denied in my account. I argue, however, that when his account is analysed, Maher reveals no scientifically significant way in which the time-order counts, and that indeed his view is in the end best regarded as a less than optimally formulated version of my own. Lange has also responded to Maher by arguing that the apparent relevance of temporal considerations is merely apparent: what is really involved, according to Lange, is whether or not a hypothesis constitutes an "arbitrary conjunction." I argue that Lange's suggestion fails: the correct analysis of his and Maher's examples is that provided by my account.
There is considerable disagreement about the epistemic value of novel predictive success, i.e. when a scientist predicts an unexpected phenomenon, experiments are conducted, and the prediction proves to be accurate. We survey the field on this question, noting both fully articulated views, such as weak and strong predictivism, and more nascent views, such as pluralist reasons for the instrumental value of prediction. By examining the various reasons offered for the value of prediction across a range of inferential contexts, we can see that neither weak nor strong predictivism captures all of the reasons for valuing prediction available. A third path is presented: Pluralist Instrumental Predictivism, or PIP for short.
It is taken for granted that the explanation of the Universe’s space-time dimension belongs to the host of arguments that exhibit the superiority of modern (inflationary) cosmology over the standard model. In the present paper some doubts are expressed. They are based upon the fact that superstring theory is too formal to represent a genuine unification of general relativity and quantum field theory. Nevertheless, this fact does not exclude the possibility that superstring theory will become more physical in the future. Hence this paper does not aim to question either string cosmology or superstring theory; it asks for “tolerance in the matters cosmological”. It advises researchers not to dwell on the common way of unification and to take the other ways into consideration as well.
The thesis of theory-ladenness of observations, in its various guises, is widely considered as either ill-conceived or harmless to the rationality of science. The latter view rests partly on the work of the proponents of New Experimentalism who have argued, among other things, that experimental practices are efficient in guarding against any epistemological threat posed by theory-ladenness. In this paper I show that one can generate a thesis of theory-ladenness for experimental practices from an influential New Experimentalist account. The notion I introduce for this purpose is the concept of ‘theory-driven data reliability judgments’ (TDRs), according to which theories that are to be tested with a particular set of data guide reliability judgments about those very same data. I provide various prominent historical examples to show that TDRs are used by scientists to resolve data conflicts. I argue that the rationality of the practices which employ TDRs can be saved if the independent support of the theories driving TDRs is construed in a particular way.
There are different kinds of uncertainty. I outline some of the various ways that uncertainty enters science, focusing on uncertainty in climate science and weather prediction. I then show how we cope with some of these sources of error through sophisticated modelling techniques, and how we maintain confidence in the face of error.
Imagine putting together a jigsaw puzzle that works like the board game in the movie “Jumanji”: when you finish, whatever the puzzle portrays becomes real. The children playing “Jumanji” learn to prepare for the reality that emerges from the next throw of the dice. But how would this work for the puzzle of scientific research? How do you prepare for unlocking the secrets of the atom, or for assembling, from the bottom up, nanotechnologies with unforeseen properties – especially when completion of such puzzles lies decades after the first scattered pieces are tentatively assembled? In the inaugural issue of this journal, Michael Polanyi argued that because the progress of science is unpredictable, society must simply carry on solving the puzzle until the picture completes itself. Decades earlier, Frederick Soddy argued that once the potential for danger reveals itself, one must reorient the whole of one’s work to avoid it. While both scientists stake out extreme positions, Soddy’s approach – together with the action taken by the like-minded Leo Szilard – provides a foundation for the anticipatory governance of emerging technologies. This paper narrates the intertwining stories of Polanyi, Soddy and Szilard, revealing how anticipation influenced governance in the case of atomic weapons and how Polanyi’s claim in “The Republic of Science” of an unpredictable and hence ungovernable science is faulty on multiple levels.
This paper investigates whether there is a discrepancy between stated and actual aims in biomechanical research, particularly with respect to hypothesis testing. We present an analysis of one hundred papers recently published in The Journal of Experimental Biology and the Journal of Biomechanics, and examine the prevalence of papers which have hypothesis testing as a stated aim, contain hypothesis testing claims that appear to be purely presentational, and have exploration as a stated aim. We found that whereas no papers had exploration as a stated aim, 58 per cent of papers had hypothesis testing as a stated aim. We had strong suspicions, at the bare minimum, that presentational hypotheses were present in 31 per cent of the papers in this latter group.
Philosophy can shed light on mathematical modeling and the juxtaposition of modeling and empirical data. This paper explores three philosophical traditions of the structure of scientific theory—Syntactic, Semantic, and Pragmatic—to show that each illuminates mathematical modeling. The Pragmatic View identifies four critical functions of mathematical modeling: (1) unification of both models and data, (2) model fitting to data, (3) mechanism identification accounting for observation, and (4) prediction of future observations. Such facets are explored using a recent exchange between two groups of mathematical modelers in plant biology. Scientific debate can arise from different modeling philosophies.