  • Objective evidence and rules of strategy: Achinstein on method. Review symposium on Peter Achinstein, Evidence and Method: Scientific Strategies of Isaac Newton and James Clerk Maxwell (Oxford and New York: Oxford University Press, 2013, 177 pp.). William L. Harper, Kent W. Staley, Henk W. de Regt & Peter Achinstein - 2014 - Metascience 23 (3):413-442.
  • Detection of unfaithfulness and robust causal inference.Jiji Zhang & Peter Spirtes - 2008 - Minds and Machines 18 (2):239-271.
    Much of the recent work on the epistemology of causation has centered on two assumptions, known as the Causal Markov Condition and the Causal Faithfulness Condition. Philosophical discussions of the latter condition have exhibited situations in which it is likely to fail. This paper studies the Causal Faithfulness Condition as a conjunction of weaker conditions. We show that some of the weaker conjuncts can be empirically tested, and hence do not have to be assumed a priori. Our results lead to (...)
  • Strategies for securing evidence through model criticism.Kent W. Staley - 2012 - European Journal for Philosophy of Science 2 (1):21-43.
    Some accounts of evidence regard it as an objective relationship holding between data and hypotheses, perhaps mediated by a testing procedure. Mayo’s error-statistical theory of evidence is an example of such an approach. Such a view leaves open the question of when an epistemic agent is justified in drawing an inference from such data to a hypothesis. Using Mayo’s account as an illustration, I propose a framework for addressing the justification question via a relativized notion, which I designate security, (...)
  • Internalist and externalist aspects of justification in scientific inquiry.Kent Staley & Aaron Cobb - 2011 - Synthese 182 (3):475-492.
    While epistemic justification is a central concern for both contemporary epistemology and philosophy of science, debates in contemporary epistemology about the nature of epistemic justification have not been discussed extensively by philosophers of science. As a step toward a coherent account of scientific justification that is informed by, and sheds light on, justificatory practices in the sciences, this paper examines one of these debates—the internalist-externalist debate—from the perspective of objective accounts of scientific evidence. In particular, we focus on Deborah Mayo’s (...)
  • Error-statistical elimination of alternative hypotheses.Kent Staley - 2008 - Synthese 163 (3):397-408.
    I consider the error-statistical account as both a theory of evidence and as a theory of inference. I seek to show how inferences regarding the truth of hypotheses can be upheld by avoiding a certain kind of alternative hypothesis problem. In addition to the testing of assumptions behind the experimental model, I discuss the role of judgments of implausibility. A benefit of my analysis is that it reveals a continuity in the application of error-statistical assessment to low-level empirical hypotheses and (...)
  • Evidence and Justification in Groups with Conflicting Background Beliefs.Kent W. Staley - 2010 - Episteme 7 (3):232-247.
    Some prominent accounts of scientific evidence treat evidence as an unrelativized concept. But whether belief in a hypothesis is justified seems relative to the epistemic situation of the believer. The issue becomes yet more complicated in the context of group epistemic agents, for then one confronts the problem of relativizing to an epistemic situation that may include conflicting beliefs. As a step toward resolution of these difficulties, an ideal of justification is here proposed that incorporates both an unrelativized evidence requirement (...)
  • Science without (parametric) models: the case of bootstrap resampling.Jan Sprenger - 2011 - Synthese 180 (1):65-76.
    Scientific and statistical inferences build heavily on explicit, parametric models, and often with good reasons. However, the limited scope of parametric models and the increasing complexity of the studied systems in modern science raise the risk of model misspecification. Therefore, I examine alternative, data-based inference techniques, such as bootstrap resampling. I argue that their neglect in the philosophical literature is unjustified: they suit some contexts of inquiry much better and use a more direct approach to scientific inference. Moreover, they make (...)
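    Sprenger's data-based alternative to parametric modeling can be illustrated with a minimal sketch of nonparametric bootstrap resampling (my own illustration, not code or data from the paper): resample the observed data with replacement many times and read a confidence interval off the empirical distribution of the statistic, with no parametric model assumed.

    ```python
    import random

    def bootstrap_ci(data, stat, n_resamples=2000, alpha=0.05, seed=0):
        """Percentile bootstrap confidence interval for a statistic.

        Draws n_resamples resamples of the data with replacement and
        returns the empirical alpha/2 and 1 - alpha/2 quantiles of the
        resampled statistic.
        """
        rng = random.Random(seed)
        reps = sorted(
            stat([rng.choice(data) for _ in range(len(data))])
            for _ in range(n_resamples)
        )
        lo = reps[int((alpha / 2) * n_resamples)]
        hi = reps[int((1 - alpha / 2) * n_resamples) - 1]
        return lo, hi

    # Hypothetical sample; its mean is 2.54.
    sample = [2.1, 2.5, 1.9, 3.2, 2.8, 2.2, 2.6, 3.0, 2.4, 2.7]
    mean = lambda xs: sum(xs) / len(xs)
    low, high = bootstrap_ci(sample, mean)
    print(low, high)  # an interval bracketing the sample mean
    ```

    The only inputs are the data and the statistic itself, which is the sense in which the approach is "more direct" than fitting a parametric family first.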
  • Who Should Be Afraid of the Jeffreys-Lindley Paradox?Aris Spanos - 2013 - Philosophy of Science 80 (1):73-93.
    The article revisits the large n problem as it relates to the Jeffreys-Lindley paradox to compare the frequentist, Bayesian, and likelihoodist approaches to inference and evidence. It is argued that what is fallacious is to interpret a rejection of the null hypothesis as providing the same evidence for a particular alternative, irrespective of n; this is an example of the fallacy of rejection. Moreover, the Bayesian and likelihoodist approaches are shown to be susceptible to the fallacy of acceptance. The key difference is that (...)
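    The large-n phenomenon behind the Jeffreys-Lindley paradox can be exhibited numerically. The sketch below is my own illustration (not from Spanos's paper): for a normal mean with known σ = 1, hold the sample mean exactly at the two-sided 5% rejection boundary, x̄ = 1.96/√n, and compute the Bayes factor for H0: μ = 0 against a N(0, τ²) prior on μ. The p-value is 0.05 at every n, yet the Bayes factor increasingly favors the null.

    ```python
    import math

    def bayes_factor_01(xbar, n, sigma=1.0, tau=1.0):
        """Bayes factor for H0: mu = 0 vs H1: mu ~ N(0, tau^2),
        given a sample mean xbar of n observations with known sigma."""
        s2 = sigma ** 2 / n        # variance of xbar under H0
        m2 = s2 + tau ** 2         # marginal variance of xbar under H1
        return math.sqrt(m2 / s2) * math.exp(-0.5 * xbar ** 2 * (1 / s2 - 1 / m2))

    for n in (10, 100, 10_000, 1_000_000):
        xbar = 1.96 / math.sqrt(n)   # data fixed exactly at the 5% boundary
        print(n, round(bayes_factor_01(xbar, n), 2))
    # Same p-value (0.05) at every n, yet the Bayes factor for H0
    # grows without bound as n increases.
    ```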
  • Revisiting Haavelmo's structural econometrics: bridging the gap between theory and data.Aris Spanos - 2015 - Journal of Economic Methodology 22 (2):171-196.
    The objective of the paper is threefold. First, to argue that some of Haavelmo's methodological ideas and insights have been neglected because they are largely at odds with the traditional perspective that views empirical modeling in economics as an exercise in curve-fitting. Second, to make a case that this neglect has contributed to the unreliability of empirical evidence in economics that is largely due to statistical misspecification. The latter affects the reliability of inference by inducing discrepancies between the actual and (...)
  • Is frequentist testing vulnerable to the base-rate fallacy?Aris Spanos - 2010 - Philosophy of Science 77 (4):565-583.
    This article calls into question the charge that frequentist testing is susceptible to the base-rate fallacy. It is argued that the apparent similarity between examples like the Harvard Medical School test and frequentist testing is highly misleading. A closer scrutiny reveals that such examples have none of the basic features of a proper frequentist test, such as legitimate data, hypotheses, test statistics, and sampling distributions. Indeed, the relevant error probabilities are replaced with the false positive/negative rates that constitute deductive calculations (...)
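    The Harvard Medical School example that Spanos scrutinizes is, on its face, a one-line Bayes-rule calculation. A sketch with the standard textbook figures (prevalence 1/1000, false-positive rate 5%, perfect sensitivity; these numbers are the classic ones, not taken from this paper):

    ```python
    def positive_predictive_value(prevalence, sensitivity, false_positive_rate):
        """P(disease | positive test) by Bayes' rule."""
        true_pos = prevalence * sensitivity
        false_pos = (1 - prevalence) * false_positive_rate
        return true_pos / (true_pos + false_pos)

    ppv = positive_predictive_value(prevalence=0.001,
                                    sensitivity=1.0,
                                    false_positive_rate=0.05)
    print(round(ppv, 4))  # about 0.0196: disease is still unlikely given a positive
    ```

    Spanos's point is that these false positive/negative rates are deductive base-rate calculations, not the error probabilities of a proper frequentist test, so the analogy to significance testing misfires.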
  • Error statistical modeling and inference: Where methodology meets ontology.Aris Spanos & Deborah G. Mayo - 2015 - Synthese 192 (11):3533-3555.
    In empirical modeling, an important desideratum for deeming theoretical entities and processes real is that they be reproducible in a statistical sense. Current-day crises regarding replicability in science intertwine with the question of how statistical methods link data to statistical and substantive theories and models. Different answers to this question have important methodological consequences for inference, which are intertwined with a contrast between the ontological commitments of the two types of models. The key to untangling them is (...)
  • Curve Fitting, the Reliability of Inductive Inference, and the Error‐Statistical Approach.Aris Spanos - 2007 - Philosophy of Science 74 (5):1046-1066.
    The main aim of this paper is to revisit the curve fitting problem using the reliability of inductive inference as a primary criterion for the ‘fittest' curve. Viewed from this perspective, it is argued that a crucial concern with the current framework for addressing the curve fitting problem is, on the one hand, the undue influence of the mathematical approximation perspective, and on the other, the insufficient attention paid to the statistical modeling aspects of the problem. Using goodness-of-fit as the (...)
  • A frequentist interpretation of probability for model-based inductive inference.Aris Spanos - 2013 - Synthese 190 (9):1555-1585.
    The main objective of the paper is to propose a frequentist interpretation of probability in the context of model-based induction, anchored on the Strong Law of Large Numbers (SLLN) and justifiable on empirical grounds. It is argued that the prevailing views in philosophy of science concerning induction and the frequentist interpretation of probability are unduly influenced by enumerative induction, and the von Mises rendering, both of which are at odds with frequentist model-based induction that dominates current practice. The differences between (...)
  • On the Jeffreys-Lindley Paradox.Christian P. Robert - 2014 - Philosophy of Science 81 (2):216-232.
    This article discusses the dual interpretation of the Jeffreys-Lindley paradox associated with Bayesian posterior probabilities and Bayes factors, both as a differentiation between frequentist and Bayesian statistics and as a pointer to the difficulty of using improper priors while testing. I stress the considerable impact of this paradox on the foundations of both classical and Bayesian statistics. While assessing existing resolutions of the paradox, I focus on a critical viewpoint of the paradox discussed by Spanos in Philosophy of Science.
  • Rejoinder: Error in Economics: Towards a More Evidence-Based Methodology, Julian Reiss, Routledge, 2007, xxiv + 246 pages. [REVIEW] Julian Reiss - 2009 - Economics and Philosophy 25 (2):210-215.
  • Computer simulation through an error-statistical lens.Wendy S. Parker - 2008 - Synthese 163 (3):371-384.
    After showing how Deborah Mayo’s error-statistical philosophy of science might be applied to address important questions about the evidential status of computer simulation results, I argue that an error-statistical perspective offers an interesting new way of thinking about computer simulation models and has the potential to significantly improve the practice of simulation model evaluation. Though intended primarily as a contribution to the epistemology of simulation, the analysis also serves to fill in details of Mayo’s epistemology of experiment.
  • The error statistical philosopher as normative naturalist.Deborah Mayo & Jean Miller - 2008 - Synthese 163 (3):305-314.
    We argue for a naturalistic account for appraising scientific methods that carries non-trivial normative force. We develop our approach by comparison with Laudan’s (American Philosophical Quarterly 24:19–31, 1987, Philosophy of Science 57:20–33, 1990) “normative naturalism” based on correlating means (various scientific methods) with ends (e.g., reliability). We argue that such a meta-methodology based on means–ends correlations is unreliable and cannot achieve its normative goals. We suggest another approach for meta-methodology based on a conglomeration of tools and strategies (from statistical modeling, (...)
  • Some surprising facts about surprising facts.D. Mayo - 2014 - Studies in History and Philosophy of Science Part A 45:79-86.
    A common intuition about evidence is that if data x have been used to construct a hypothesis H, then x should not be used again in support of H. It is no surprise that x fits H, if H was deliberately constructed to accord with x. The question of when and why we should avoid such “double-counting” continues to be debated in philosophy and statistics. It arises as a prohibition against data mining, hunting for significance, tuning on the signal, and (...)
  • Severe testing as a basic concept in a Neyman–Pearson philosophy of induction.Deborah G. Mayo & Aris Spanos - 2006 - British Journal for the Philosophy of Science 57 (2):323-357.
    Despite the widespread use of key concepts of the Neyman–Pearson (N–P) statistical paradigm—type I and II errors, significance levels, power, confidence levels—they have been the subject of philosophical controversy and debate for over 60 years. Both current and long-standing problems of N–P tests stem from unclarity and confusion, even among N–P adherents, as to how a test's (pre-data) error probabilities are to be used for (post-data) inductive inference as opposed to inductive behavior. We argue that the relevance of error probabilities (...)
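    Mayo and Spanos's post-data severity assessment has a compact form for the one-sided normal test (H0: μ ≤ μ0 vs H1: μ > μ0, known σ). For a statistically significant result x̄0, the severity of the claim μ > μ1 is SEV(μ > μ1) = P(X̄ ≤ x̄0; μ = μ1). The sketch below uses that standard formula with illustrative numbers of my own choosing:

    ```python
    import math

    def normal_cdf(z):
        """Standard normal CDF via the error function."""
        return 0.5 * (1 + math.erf(z / math.sqrt(2)))

    def severity(xbar_obs, mu1, sigma, n):
        """Severity for the claim mu > mu1 after a significant result:
        the probability of a sample mean no larger than the one observed,
        were mu exactly mu1 (test of H0: mu <= mu0 vs H1: mu > mu0)."""
        return normal_cdf((xbar_obs - mu1) * math.sqrt(n) / sigma)

    # Illustrative: n = 100, sigma = 2, observed mean 0.4 (z = 2.0 vs mu0 = 0).
    for mu1 in (0.0, 0.2, 0.4, 0.6):
        print(mu1, round(severity(0.4, mu1, sigma=2.0, n=100), 3))
    # Severity is high for modest claims (mu > 0) and drops sharply for
    # claims that outstrip the data (mu > 0.6).
    ```

    This is the sense in which pre-data error probabilities are redeployed post-data: the same sampling distribution grades which discrepancies from the null the observed result has and has not probed severely.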
  • Philosophical Scrutiny of Evidence of Risks: From Bioethics to Bioevidence.Deborah G. Mayo & Aris Spanos - 2006 - Philosophy of Science 73 (5):803-816.
    We argue that a responsible analysis of today's evidence-based risk assessments and risk debates in biology demands a critical or metascientific scrutiny of the uncertainties, assumptions, and threats of error along the manifold steps in risk analysis. Without an accompanying methodological critique, neither sensitivity to social and ethical values, nor conceptual clarification alone, suffices. In this view, restricting the invitation for philosophical involvement to those wearing a "bioethicist" label precludes the vitally important role philosophers of science may be able to (...)
  • How to discount double-counting when it counts: Some clarifications.Deborah G. Mayo - 2008 - British Journal for the Philosophy of Science 59 (4):857-879.
    The issues of double-counting, use-constructing, and selection effects have long been the subject of debate in the philosophical as well as statistical literature. I have argued that it is the severity, stringency, or probativeness of the test—or lack of it—that should determine if a double-use of data is admissible. Hitchcock and Sober ([2004]) question whether this ‘severity criterion' can perform its intended job. I argue that their criticisms stem from a flawed interpretation of the severity criterion. Taking their criticism as (...)
  • How experimental algorithmics can benefit from Mayo’s extensions to Neyman–Pearson theory of testing.Thomas Bartz-Beielstein - 2008 - Synthese 163 (3):385 - 396.
    Although theoretical results for several algorithms in many application domains were presented during the last decades, not all algorithms can be analyzed fully theoretically. Experimentation is necessary. The analysis of algorithms should follow the same principles and standards of other empirical sciences. This article focuses on stochastic search algorithms, such as evolutionary algorithms or particle swarm optimization. Stochastic search algorithms tackle hard real-world optimization problems, e.g., problems from chemical engineering, airfoil optimization, or bio-informatics, where classical methods from mathematical optimization fail. (...)
  • Tackling Duhemian Problems: An Alternative to Skepticism of Neuroimaging in Philosophy of Cognitive Science.Emrah Aktunc - 2014 - Review of Philosophy and Psychology 5 (4):449-464.
    Duhem’s problem arises especially in scientific contexts where the tools and procedures of measurement and analysis are numerous and complex. Several philosophers of cognitive science have cited its manifestations in fMRI as grounds for skepticism regarding the epistemic value of neuroimaging. To address these Duhemian arguments for skepticism, I offer an alternative approach based on Deborah Mayo’s error-statistical account in which Duhem's problem is more fruitfully approached in terms of error probabilities. This is illustrated in examples such as the use (...)