Results for 'Statistical hypothesis testing'

1000+ found
  1. Null-hypothesis tests are not completely stupid, but Bayesian statistics are better.David Rindskopf - 1998 - Behavioral and Brain Sciences 21 (2):215-216.
    Unfortunately, reading Chow's work is likely to leave the reader more confused than enlightened. My preferred solutions to the “controversy” about null-hypothesis testing are: (1) recognize that we really want to test the hypothesis that an effect is “small,” not null, and (2) use Bayesian methods, which are much more in keeping with the way humans naturally think than are classical statistical methods.
  2. Sound and relatively complete belief Hoare logic for statistical hypothesis testing programs.Yusuke Kawamoto, Tetsuya Sato & Kohei Suenaga - 2024 - Artificial Intelligence 326 (C):104045.
  3. Hypothesis-Testing Demands Trustworthy Data—A Simulation Approach to Inferential Statistics Advocating the Research Program Strategy.Antonia Krefeld-Schwalb, Erich H. Witte & Frank Zenker - 2018 - Frontiers in Psychology 9.
  4. Hypothesis testing in statistics.G. Casella & R. Berger - 2001 - In Neil J. Smelser & Paul B. Baltes (eds.), International Encyclopedia of the Social and Behavioral Sciences. Elsevier. pp. 7118--7121.
     
  5. Parameter estimation or hypothesis testing in the statistical analysis of biological rhythms?Ernst Pöppel - 1975 - Bulletin of the Psychonomic Society 6 (5):511-512.
  6. Hypothesis Testing as a Moral Choice.David J. Pittenger - 2001 - Ethics and Behavior 11 (2):151-162.
    Although many researchers may perceive empirical hypothesis testing using inferential statistics to be a value free process, I argue that any conclusion based on inferential statistics contains an important and intractable value judgment. Consequently, I conclude that researchers should use the same rationale for examining the ethical ramifications of committing errors in statistical inference that they use to examine the ethical parameters of a proposed research design.
    2 citations
  7. Logically-consistent hypothesis testing and the hexagon of oppositions.Julio Michael Stern, Rafael Izbicki, Luis Gustavo Esteves & Rafael Bassi Stern - 2017 - Logic Journal of the IGPL 25 (5):741-757.
    Although logical consistency is desirable in scientific research, standard statistical hypothesis tests are typically logically inconsistent. To address this issue, previous work introduced agnostic hypothesis tests and proved that they can be logically consistent while retaining statistical optimality properties. This article characterizes the credal modalities in agnostic hypothesis tests and uses the hexagon of oppositions to explain the logical relations between these modalities. Geometric solids that are composed of hexagons of oppositions illustrate the conditions for (...)
    8 citations
  8. The Logical Consistency of Simultaneous Agnostic Hypothesis Tests.Julio Michael Stern - 2016 - Entropy 8 (256):1-22.
    Simultaneous hypothesis tests can fail to provide results that meet logical requirements. For example, if A and B are two statements such that A implies B, there exist tests that, based on the same data, reject B but not A. Such outcomes are generally inconvenient to statisticians (who want to communicate the results to practitioners in a simple fashion) and non-statisticians (confused by conflicting pieces of information). Based on this inconvenience, one might want to use tests that satisfy logical (...)
    7 citations
  9. Hypothesis Testing, “Dutch Book” Arguments, and Risk.Daniel Malinsky - 2015 - Philosophy of Science 82 (5):917-929.
    “Dutch Book” arguments and references to gambling theorems are typical in the debate between Bayesians and scientists committed to “classical” statistical methods. These arguments have rarely convinced non-Bayesian scientists to abandon certain conventional practices, partially because many scientists feel that gambling theorems have little relevance to their research activities. In other words, scientists “don’t bet.” This article examines one attempt, by Schervish, Seidenfeld, and Kadane, to progress beyond such apparent stalemates by connecting “Dutch Book”–type mathematical results with principles actually (...)
    2 citations
  10. The role of hypothesis testing in the molding of econometric models.Kevin D. Hoover - 2013 - Erasmus Journal for Philosophy and Economics 6 (2):43.
    This paper addresses the role of specification tests in the selection of a statistically admissible model used to evaluate economic hypotheses. The issue is formulated in the context of recent philosophical accounts on the nature of models and related to some results in the literature on specification search. In contrast to enumerative induction and a priori theory, powerful search methodologies are often adequate substitutes for experimental methods. They underwrite and support, rather than distort, statistical hypothesis tests. Their success (...)
    1 citation
  11. A new paradigm for hypothesis testing in medicine, with examination of the Neyman Pearson condition.G. William Moore, Grover M. Hutchins & Robert E. Miller - 1986 - Theoretical Medicine and Bioethics 7 (3).
    In the past, hypothesis testing in medicine has employed the paradigm of the repeatable experiment. In statistical hypothesis testing, an unbiased sample is drawn from a larger source population, and a calculated statistic is compared to a preassigned critical region, on the assumption that the comparison could be repeated an indefinite number of times. However, repeated experiments often cannot be performed on human beings, due to ethical or economic constraints. We describe a new paradigm for (...)
     
  12. Neyman-Pearson Hypothesis Testing, Epistemic Reliability and Pragmatic Value-Laden Asymmetric Error Risks.Adam P. Kubiak, Paweł Kawalec & Adam Kiersztyn - 2022 - Axiomathes 32 (4):585-604.
    We show that if among the tested hypotheses the number of true hypotheses is not equal to the number of false hypotheses, then Neyman-Pearson theory of testing hypotheses does not warrant minimal epistemic reliability. We also argue that N-P does not protect from the possible negative effects of the pragmatic value-laden unequal setting of error probabilities on N-P’s epistemic reliability. Most importantly, we argue that in the case of a negative impact no methodological adjustment is available to neutralize it, (...)
    2 citations
  13. Statistical significance testing, hypothetico-deductive method, and theory evaluation.Brian D. Haig - 2000 - Behavioral and Brain Sciences 23 (2):292-293.
    Chow's endorsement of a limited role for null hypothesis significance testing is a needed corrective of research malpractice, but his decision to place this procedure in a hypothetico-deductive framework of Popperian cast is unwise. Various failures of this version of the hypothetico-deductive method have negative implications for Chow's treatment of significance testing, meta-analysis, and theory evaluation.
  14. Parameter estimation vs. hypothesis testing.M. I. Charles E. Woodson - 1969 - Philosophy of Science 36 (2):203-204.
    Professor Meehl [2] has pointed out a very significant problem in the methodology of psychological research, indicating that statistical tests of psychological hypotheses against a null hypothesis are loaded in favor of eventual success at rejecting the null hypothesis. In my opinion this is not, however, a contrast between physics and psychology, but rather between the method of parameter estimation and that of the null hypothesis in the tradition of Fisher. A physicist could use the null (...)
    1 citation
  15. The use and limitations of null-model-based hypothesis testing.Mingjun Zhang - 2020 - Biology and Philosophy 35 (2):1-22.
    In this article I give a critical evaluation of the use and limitations of null-model-based hypothesis testing as a research strategy in the biological sciences. According to this strategy, the null model based on a randomization procedure provides an appropriate null hypothesis stating that the existence of a pattern is the result of random processes or can be expected by chance alone, and proponents of other hypotheses should first try to reject this null hypothesis in order (...)
  16. Misalignment Between Research Hypotheses and Statistical Hypotheses: A Threat to Evidence-Based Medicine?Insa Lawler & Georg Zimmermann - 2019 - Topoi 40 (2):307-318.
    Evidence-based medicine frequently uses statistical hypothesis testing. In this paradigm, data can only disconfirm a research hypothesis’ competitors: One tests the negation of a statistical hypothesis that is supposed to correspond to the research hypothesis. In practice, these hypotheses are often misaligned. For instance, directional research hypotheses are often paired with non-directional statistical hypotheses. Prima facie, one cannot gain proper evidence for one’s research hypothesis employing a misaligned statistical hypothesis. (...)
    3 citations
  17. Null hypothesis statistical testing and the balance between positive and negative approaches.Adam S. Goodie - 2004 - Behavioral and Brain Sciences 27 (3):338-339.
    Several of Krueger & Funder's (K&F's) suggestions may promote more balanced social cognition research, but reconsidered null hypothesis statistical testing (NHST) is not one of them. Although NHST has primarily supported negative conclusions, this is simply because most conclusions have been negative. NHST can support positive, negative, and even balanced conclusions. Better NHST practices would benefit psychology, but would not alter the balance between positive and negative approaches.
  18. Asking questions in biology: a guide to hypothesis testing, experimental design and presentation in practical work and research projects.C. J. Barnard - 2011 - New York: Pearson. Edited by Francis S. Gilbert & Peter K. McGregor.
    Asking and answering questions is the cornerstone of science yet formal training in understanding this key process is often overlooked. "Asking Questions in Biology" unpacks this crucial process of enquiry, from a biological perspective, at its various stages. It begins with an overview of scientific question-asking in general, before moving on to demonstrate how to derive hypotheses from unstructured observations. It then explains in the main sections of the book, how to use statistical tests as tools to analyse data (...)
  19. Testing a precise null hypothesis: the case of Lindley’s paradox.Jan Sprenger - 2013 - Philosophy of Science 80 (5):733-744.
    The interpretation of tests of a point null hypothesis against an unspecified alternative is a classical and yet unresolved issue in statistical methodology. This paper approaches the problem from the perspective of Lindley's Paradox: the divergence of Bayesian and frequentist inference in hypothesis tests with large sample size. I contend that the standard approaches in both frameworks fail to resolve the paradox. As an alternative, I suggest the Bayesian Reference Criterion: it targets the predictive performance of the (...)
    10 citations
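    As a quick numerical illustration of the divergence Sprenger discusses (my own sketch, not from the paper; it assumes a normal model with known σ = 1 and an N(0, τ²) prior on the mean under the alternative), holding a sample mean exactly at the two-sided 5% significance boundary makes the frequentist verdict constant while the Bayes factor swings toward the point null as n grows:

    ```python
    from math import sqrt
    from statistics import NormalDist

    def bayes_factor_01(xbar, n, sigma=1.0, tau=1.0):
        """Bayes factor for H0: mu = 0 against H1: mu ~ N(0, tau^2),
        given the mean xbar of n observations with known sigma."""
        null = NormalDist(0, sigma / sqrt(n)).pdf(xbar)          # marginal under H0
        alt = NormalDist(0, sqrt(tau**2 + sigma**2 / n)).pdf(xbar)  # marginal under H1
        return null / alt

    # Keep xbar exactly at the two-sided 5% boundary (p = 0.05 for every n):
    for n in (10, 1000, 100000):
        xbar = 1.96 / sqrt(n)
        print(n, round(bayes_factor_01(xbar, n), 2))
    # The Bayes factor grows roughly with sqrt(n), eventually favoring H0
    # strongly on data a frequentist test rejects: Lindley's paradox.
    ```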
  20. Précis of statistical significance: Rationale, validity, and utility.Siu L. Chow - 1998 - Behavioral and Brain Sciences 21 (2):169-194.
    The null-hypothesis significance-test procedure (NHSTP) is defended in the context of the theory-corroboration experiment, as well as the following contrasts: (a) substantive hypotheses versus statistical hypotheses, (b) theory corroboration versus statistical hypothesis testing, (c) theoretical inference versus statistical decision, (d) experiments versus nonexperimental studies, and (e) theory corroboration versus treatment assessment. The null hypothesis can be true because it is the hypothesis that errors are randomly distributed in data. Moreover, the null (...) is never used as a categorical proposition. Statistical significance means only that chance influences can be excluded as an explanation of data; it does not identify the nonchance factor responsible. The experimental conclusion is drawn with the inductive principle underlying the experimental design. A chain of deductive arguments gives rise to the theoretical conclusion via the experimental conclusion. The anomalous relationship between statistical significance and the effect size often used to criticize NHSTP is more apparent than real. The absolute size of the effect is not an index of evidential support for the substantive hypothesis. Nor is the effect size, by itself, informative as to the practical importance of the research result. Being a conditional probability, statistical power cannot be the apriori probability of statistical significance. The validity of statistical power is debatable because statistical significance is determined with a single sampling distribution of the test statistic based on H0, whereas it takes two distributions to represent statistical power or effect size. Sample size should not be determined in the mechanical manner envisaged in power analysis. It is inappropriate to criticize NHSTP for nonstatistical reasons. 
At the same time, neither effect size, nor confidence interval estimate, nor posterior probability can be used to exclude chance as an explanation of data. Neither can any of them fulfill the nonstatistical functions expected of them by critics.
    9 citations
  21. Costs and benefits of statistical significance tests.Michael G. Shafto - 1998 - Behavioral and Brain Sciences 21 (2):218-219.
    Chow's book provides a thorough analysis of the confusing array of issues surrounding conventional tests of statistical significance. This book should be required reading for behavioral and social scientists. Chow concludes that the null-hypothesis significance-testing procedure (NHSTP) plays a limited, but necessary, role in the experimental sciences. Another possibility is that – owing in part to its metaphorical underpinnings and convoluted logic – the NHSTP is declining in importance in those few sciences in which it ever played (...)
  22. Optimizing α for better statistical decisions: A case study involving the pace‐of‐life syndrome hypothesis.Joseph F. Mudge, Faith M. Penny & Jeff E. Houlahan - 2012 - Bioessays 34 (12):1045-1049.
    Setting optimal significance levels that minimize Type I and Type II errors allows for more transparent and well‐considered statistical decision making compared to the traditional α = 0.05 significance level. We use the optimal α approach to re‐assess conclusions reached by three recently published tests of the pace‐of‐life syndrome hypothesis, which attempts to unify occurrences of different physiological, behavioral, and life history characteristics under one theory, over different scales of biological organization. While some of the conclusions reached using (...)
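    The optimal-α idea can be sketched numerically. Everything below is an illustrative reconstruction, not Mudge et al.'s code: a one-sided z-test with known σ = 1, an assumed standardized effect size, and equal weighting of Type I and Type II error rates.

    ```python
    from statistics import NormalDist

    norm = NormalDist()

    def type_ii_error(alpha, effect_size, n):
        """Beta for a one-sided z-test of H0: mu = 0 vs H1: mu = effect_size
        (known sigma = 1, so effect_size is standardized)."""
        critical = norm.inv_cdf(1 - alpha)              # rejection threshold in z units
        return norm.cdf(critical - effect_size * n ** 0.5)

    def optimal_alpha(effect_size, n, grid_steps=9999):
        """Grid-search alpha in (0, 1) minimizing the average error rate
        (alpha + beta) / 2, i.e. Type I and Type II errors weighted equally."""
        candidates = (i / (grid_steps + 1) for i in range(1, grid_steps + 1))
        return min(candidates,
                   key=lambda a: (a + type_ii_error(a, effect_size, n)) / 2)

    a = optimal_alpha(effect_size=0.5, n=30)
    print(round(a, 3))  # for this fairly large effect, the optimum sits above 0.05
    ```

    The point of the exercise is that the conventional α = 0.05 is rarely the level that minimizes total error; depending on effect size and n, the optimum can be larger or smaller.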
  23. Comment on Gignac and Zajenkowski, “The Dunning-Kruger effect is (mostly) a statistical artefact: Valid approaches to testing the hypothesis with individual differences data”.Avram Hiller - 2023 - Intelligence 97 (March-April):101732.
    Gignac and Zajenkowski (2020) find that “the degree to which people mispredicted their objectively measured intelligence was equal across the whole spectrum of objectively measured intelligence”. This Comment shows that Gignac and Zajenkowski’s (2020) finding of homoscedasticity is likely the result of a recoding choice by the experimenters and does not in fact indicate that the Dunning-Kruger Effect is a mere statistical artifact. Specifically, Gignac and Zajenkowski (2020) recoded test subjects’ responses to a question regarding self-assessed comparative IQ onto (...)
  24. Significance Testing with No Alternative Hypothesis: A Measure of Surprise.J. V. Howard - 2009 - Erkenntnis 70 (2):253-270.
    A pure significance test would check the agreement of a statistical model with the observed data even when no alternative model was available. The paper proposes the use of a modified p-value to make such a test. The model will be rejected if something surprising is observed. It is shown that the relation between this measure of surprise and the surprise indices of Weaver and Good is similar to the relationship between a p-value, a corresponding odds-ratio, and (...)
  25. Cointegration: Bayesian Significance Test Communications in Statistics.Julio Michael Stern, Marcio Alves Diniz & Carlos Alberto de Braganca Pereira - 2012 - Communications in Statistics 41 (19):3562-3574.
    To estimate causal relationships, time series econometricians must be aware of spurious correlation, a problem first mentioned by Yule (1926). To deal with this problem, one can work either with differenced series or multivariate models: VAR (VEC or VECM) models. These models usually include at least one cointegration relation. Although the Bayesian literature on VAR/VEC is quite advanced, Bauwens et al. (1999) highlighted that “the topic of selecting the cointegrating rank has not yet given very useful and convincing results”. The (...)
  26. Intelligent design and mathematical statistics: A troubled alliance.Peter Olofsson - 2008 - Biology and Philosophy 23 (4):545-553.
    The explanatory filter is a proposed method to detect design in nature with the aim of refuting Darwinian evolution. The explanatory filter borrows its logical structure from the theory of statistical hypothesis testing but we argue that, when viewed within this context, the filter runs into serious trouble in any interesting biological application. Although the explanatory filter has been extensively criticized from many angles, we present the first rigorous criticism based on the theory of mathematical statistics.
    8 citations
  27. Problems With Null Hypothesis Significance Testing (NHST): What Do the Textbooks Say?George A. Morgan - unknown
    The first of 3 objectives in this study was to address the major problem with Null Hypothesis Significance Testing (NHST) and 2 common misconceptions related to NHST that cause confusion for students and researchers. The misconceptions are (a) a smaller p indicates a stronger relationship and (b) statistical significance indicates practical importance. The second objective was to determine how this problem and the misconceptions were treated in 12 recent textbooks used in education research methods (...)
    2 citations
  28. Minimum message length and statistically consistent invariant (objective?) Bayesian probabilistic inference—from (medical) “evidence”.David L. Dowe - 2008 - Social Epistemology 22 (4):433-460.
    “Evidence” in the form of data collected and analysis thereof is fundamental to medicine, health and science. In this paper, we discuss the “evidence-based” aspect of evidence-based medicine in terms of statistical inference, acknowledging that this latter field of statistical inference often also goes by various near-synonymous names—such as inductive inference (amongst philosophers), econometrics (amongst economists), machine learning (amongst computer scientists) and, in more recent times, data mining (in some circles). Three central issues to this discussion of “evidence-based” (...)
    1 citation
  29. Paraconsistent Sensitivity Analysis for Bayesian Significance Tests.Julio Michael Stern - 2004 - Lecture Notes in Artificial Intelligence 3171:134-143.
    In this paper, the notion of degree of inconsistency is introduced as a tool to evaluate the sensitivity of the Full Bayesian Significance Test (FBST) value of evidence with respect to changes in the prior or reference density. For that, both the definition of the FBST, a possibilistic approach to hypothesis testing based on Bayesian probability procedures, and the use of bilattice structures, as introduced by Ginsberg and Fitting, in paraconsistent logics, are reviewed. The computational and theoretical advantages (...)
    11 citations
  30. Unraveling Temporal Dynamics of Multidimensional Statistical Learning in Implicit and Explicit Systems: An X‐Way Hypothesis.Stephen Man-Kit Lee, Nicole Sin Hang Law & Shelley Xiuli Tong - 2024 - Cognitive Science 48 (4):e13437.
    Statistical learning enables humans to involuntarily process and utilize different kinds of patterns from the environment. However, the cognitive mechanisms underlying the simultaneous acquisition of multiple regularities from different perceptual modalities remain unclear. A novel multidimensional serial reaction time task was developed to test 40 participants’ ability to learn simple first‐order and complex second‐order relations between uni‐modal visual and cross‐modal audio‐visual stimuli. Using the difference in reaction times between sequenced and random stimuli as the index of domain‐general statistical (...)
  31. Pragmatic Hypotheses in the Evolution of Science.Julio Michael Stern, Luis Gustavo Esteves, Rafael Izbicki & Rafael Stern - 2019 - Entropy 21 (9):1-17.
    This paper introduces pragmatic hypotheses and relates this concept to the spiral of scientific evolution. Previous works determined a characterization of logically consistent statistical hypothesis tests and showed that the modal operators obtained from this test can be represented in the hexagon of oppositions. However, despite the importance of precise hypotheses in science, they cannot be accepted by logically consistent tests. Here, we show that this dilemma can be overcome by the use of pragmatic versions of precise (...)
    3 citations
  32. An implementation of statistical default logic.Gregory Wheeler & Carlos Damasio - 2004 - In Jose Alferes & Joao Leite (eds.), Logics in Artificial Intelligence (JELIA 2004). Springer.
    Statistical Default Logic (SDL) is an expansion of classical (i.e., Reiter) default logic that allows us to model common inference patterns found in standard inferential statistics, e.g., hypothesis testing and the estimation of a population‘s mean, variance and proportions. This paper presents an embedding of an important subset of SDL theories, called literal statistical default theories, into stable model semantics. The embedding is designed to compute the signature set of literals that uniquely distinguishes each extension on (...)
    3 citations
  33. Error-statistical elimination of alternative hypotheses.Kent Staley - 2008 - Synthese 163 (3):397-408.
    I consider the error-statistical account as both a theory of evidence and as a theory of inference. I seek to show how inferences regarding the truth of hypotheses can be upheld by avoiding a certain kind of alternative hypothesis problem. In addition to the testing of assumptions behind the experimental model, I discuss the role of judgments of implausibility. A benefit of my analysis is that it reveals a continuity in the application of error-statistical assessment to (...)
    9 citations
  34. Fisher, Neyman-Pearson or NHST? A tutorial for teaching data testing.Jose D. Perezgonzalez - 2015 - Frontiers in Psychology 6:135153.
    Despite frequent calls for the overhaul of null hypothesis significance testing (NHST), this controversial procedure remains ubiquitous in behavioral, social and biomedical teaching and research. Little change seems possible once the procedure becomes well ingrained in the minds and current practice of researchers; thus, the optimal opportunity for such change is at the time the procedure is taught, be this at undergraduate or at postgraduate levels. This paper presents a tutorial for the teaching of data testing procedures, (...)
    5 citations
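    The contrast the tutorial teaches can be compressed into a few lines of code. This is a generic sketch with made-up numbers (a two-sided z-test with known σ = 1), not Perezgonzalez's material: Fisher reports the exact p-value as graded evidence, while Neyman-Pearson fixes α in advance and returns only a binary decision.

    ```python
    from math import sqrt
    from statistics import NormalDist

    norm = NormalDist()

    # Hypothetical data: mean of n = 25 observations, known sigma = 1, H0: mu = 0.
    n, xbar = 25, 0.47
    z = xbar * sqrt(n)

    # Fisher: report the exact p-value as a graded measure of evidence against H0.
    p_value = 2 * (1 - norm.cdf(abs(z)))

    # Neyman-Pearson: fix alpha before seeing the data and make a binary
    # reject / don't-reject decision; how far p falls below alpha is irrelevant.
    alpha = 0.05
    reject = abs(z) > norm.inv_cdf(1 - alpha / 2)
    ```

    NHST, as the tutorial characterizes it, is the common hybrid of the two: computing a Fisherian p-value but reading it against a Neyman-Pearson cutoff.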
  35. Realism versus instrumentalism in a new statistical framework.Gregory M. Mikkelson - 2006 - Philosophy of Science 73 (4):440-447.
    In this paper, I offer a new defense of scientific realism, tailored for the Akaikean paradigm of statistical hypothesis testing. After proposing definitions of verisimilitude and predictive success, I use computer simulations to show how the latter depends on the former, even in the kind of case featured in a recent argument for instrumentalism.
    6 citations
  36. On the emergence of minority disadvantage: testing the cultural Red King hypothesis.Aydin Mohseni, Cailin O'Connor & Hannah Rubin - 2019 - Synthese 198 (6):5599-5621.
    The study of social justice asks: what sorts of social arrangements are equitable ones? But also: how do we derive the inequitable arrangements we often observe in human societies? In particular, in spite of explicitly stated equity norms, categorical inequity tends to be the rule rather than the exception. The cultural Red King hypothesis predicts that differentials in group size may lead to inequitable outcomes for minority groups even in the absence of explicit or implicit bias. We test this (...)
    3 citations
  37. The Null-hypothesis significance-test procedure is still warranted.Siu L. Chow - 1998 - Behavioral and Brain Sciences 21 (2):228-235.
    Entertaining diverse assumptions about empirical research, commentators give a wide range of verdicts on the NHSTP defence in Statistical significance. The null-hypothesis significance-test procedure is defended in a framework in which deductive and inductive rules are deployed in theory corroboration in the spirit of Popper's Conjectures and refutations. The defensible hypothetico-deductive structure of the framework is used to make explicit the distinctions between substantive and statistical hypotheses, statistical alternative and conceptual alternative hypotheses, and making (...) decisions and drawing theoretical conclusions. These distinctions make it easier to show that H0 can be true, the effect size is irrelevant to theory corroboration, and “strong” hypotheses make no difference to NHSTP. Reservations about statistical power, meta-analysis, and the Bayesian approach are still warranted.
  38. Visual Statistical Learning With Stimuli Presented Sequentially Across Space and Time in Deaf and Hearing Adults.Beatrice Giustolisi & Karen Emmorey - 2018 - Cognitive Science 42 (8):3177-3190.
    This study investigated visual statistical learning (VSL) in 24 deaf signers and 24 hearing non‐signers. Previous research with hearing individuals suggests that SL mechanisms support literacy. Our first goal was to assess whether VSL was associated with reading ability in deaf individuals, and whether this relation was sustained by a link between VSL and sign language skill. Our second goal was to test the Auditory Scaffolding Hypothesis, which makes the prediction that deaf people should be impaired in sequential (...)
    2 citations
  39. The Quantitative-Qualitative Distinction and the Null Hypothesis Significance Testing Procedure.Nimal Ratnesar & Jim Mackenzie - 2006 - Journal of Philosophy of Education 40 (4):501-509.
    Conventional discussion of research methodology contrasts two approaches, the quantitative and the qualitative, presented as collectively exhaustive. But if qualitative is taken as the understanding of lifeworlds, the two approaches between them cover only a tiny fraction of research methodologies; and the quantitative, taken as the routine application to controlled experiments of frequentist statistics by way of the Null Hypothesis Significance Testing Procedure, is seriously flawed. It is contrary to the advice both of Fisher and of Neyman and (...)
    3 citations
  40. The quantitative-qualitative distinction and the Null hypothesis significance testing procedure.Nimal Ratnesar & Jim Mackenzie - 2006 - Journal of Philosophy of Education 40 (4):501–509.
    Conventional discussion of research methodology contrasts two approaches, the quantitative and the qualitative, presented as collectively exhaustive. But if qualitative is taken as the understanding of lifeworlds, the two approaches between them cover only a tiny fraction of research methodologies; and the quantitative, taken as the routine application to controlled experiments of frequentist statistics by way of the Null Hypothesis Significance Testing Procedure, is seriously flawed. It is contrary to the advice both of Fisher and of Neyman and (...)
    3 citations
  41. Constrained statistical inference: sample-size tables for ANOVA and regression.Leonard Vanbrabant, Rens Van De Schoot & Yves Rosseel - 2014 - Frontiers in Psychology 5:123036.
    Researchers in the social and behavioral sciences often have clear expectations about the order/direction of the parameters in their statistical model. For example, a researcher might expect that regression coefficient β1 is larger than β2 and β3. The corresponding hypothesis is H: β1 > {β2, β3} and this is known as an (order) constrained hypothesis. A major advantage of testing such a hypothesis is that power can be (...)
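    The power gain from constraining a hypothesis shows up already in the simplest case: a directional constraint on a single mean. This sketch is my own illustration (a z-test with known σ = 1), not Vanbrabant et al.'s ANOVA/regression method; the one-sided test encodes the constraint μ > 0 and beats the unconstrained two-sided test at every sample size.

    ```python
    from math import sqrt
    from statistics import NormalDist

    norm = NormalDist()

    def power(effect, n, alpha=0.05, one_sided=True):
        """Power of a z-test of H0: mu = 0 when the true mean is `effect`
        (known sigma = 1). one_sided=True encodes the constraint mu > 0."""
        shift = effect * sqrt(n)
        if one_sided:
            return 1 - norm.cdf(norm.inv_cdf(1 - alpha) - shift)
        # Two-sided: can reject in either tail.
        upper = 1 - norm.cdf(norm.inv_cdf(1 - alpha / 2) - shift)
        lower = norm.cdf(norm.inv_cdf(alpha / 2) - shift)
        return upper + lower

    # The constrained test is uniformly more powerful when the constraint holds,
    # which is why constrained designs need smaller samples for the same power.
    print(round(power(0.4, 40, one_sided=True), 3),
          round(power(0.4, 40, one_sided=False), 3))
    ```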
  42. A Weibull Wearout Test: Full Bayesian Approach.Julio Michael Stern, Telba Zalkind Irony, Marcelo de Souza Lauretto & Carlos Alberto de Braganca Pereira - 2001 - Reliability and Engineering Statistics 5:287-300.
    The Full Bayesian Significance Test (FBST) for precise hypotheses is presented, with some applications relevant to reliability theory. The FBST is an alternative to significance tests or, equivalently, to p-values. In the FBST we compute the evidence of the precise hypothesis. This evidence is the probability of the complement of a credible set "tangent" to the sub-manifold (of the parameter space) that defines the null hypothesis. We use the FBST in an application requiring a quality control of used (...)
    Direct download (2 more)  
     
    Export citation  
     
    Bookmark  
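The FBST evidence computation the abstract describes can be sketched numerically for the simplest case, a sharp null θ = θ0 in a binomial model with a uniform prior. This is a one-dimensional illustration under assumed inputs, not the reliability application from the paper:

```python
import math

def fbst_evalue(successes, failures, theta0, grid=20000):
    """Numerical FBST e-value for the sharp null H: theta = theta0 in a
    binomial model with a uniform prior, so the posterior is
    Beta(successes + 1, failures + 1).  ev(H) = 1 minus the posterior mass
    of the "tangent" set where the posterior density exceeds its value at
    theta0 (midpoint-rule integration on a grid)."""
    a, b = successes + 1, failures + 1
    logc = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)

    def logpdf(t):
        return logc + (a - 1) * math.log(t) + (b - 1) * math.log(1 - t)

    ref = logpdf(theta0)
    h = 1.0 / grid
    mass = 0.0
    for i in range(grid):
        lp = logpdf((i + 0.5) * h)
        if lp > ref:          # point lies in the tangent set
            mass += math.exp(lp) * h
    return 1.0 - mass

ev_close = fbst_evalue(52, 48, 0.5)  # data compatible with theta = 0.5
ev_far = fbst_evalue(80, 20, 0.5)    # data far from theta = 0.5
print(ev_close, ev_far)
```

Data near the null leave little posterior mass above the density at θ0, so the e-value stays high; data far from the null drive it toward zero, mirroring how a p-value behaves but with a fully posterior-based definition.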
  43. Testing Scientific Theories Through Validating Computer Models.Michael L. Cohen - 2000 - Dissertation, University of Maryland, College Park
    Attempts by 20th century philosophers of science to define inductive concepts and methods concerning the support provided to scientific theories by empirical data have been unsuccessful. Although 20th century philosophers of science largely ignored statistical methods for testing theories, when they did address them they argued against rather than for their use. In contrast, this study demonstrates that traditional statistical methods used for validating computer simulation models provide tests of the scientific theories that those models may embody. (...)
     
    Export citation  
     
    Bookmark  
  44.  15
    Frequentist statistics as a theory of inductive inference.Deborah G. Mayo & David Cox - 2006 - In Deborah G. Mayo & Aris Spanos (eds.), Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science. Cambridge University Press.
    After some general remarks about the interrelation between philosophical and statistical thinking, the discussion centres largely on significance tests. These are defined as the calculation of p-values rather than as formal procedures for ‘acceptance’ and ‘rejection’. A number of types of null hypothesis are described and a principle for evidential interpretation set out governing the implications of p-values in the specific circumstances of each application, as contrasted with a long-run interpretation. A number of more complicated situations (...)
    Direct download  
     
    Export citation  
     
    Bookmark   13 citations  
  45.  17
    An orthodox statistical resolution of the paradox of confirmation.Ronald N. Giere - 1970 - Philosophy of Science 37 (3):354-362.
    Several authors, e.g. Patrick Suppes and I. J. Good, have recently argued that the paradox of confirmation can be resolved within the developing subjective Bayesian account of inductive reasoning. The aim of this paper is to show that the paradox can also be resolved by the rival orthodox account of hypothesis testing currently employed by most statisticians and scientists. The key to the orthodox statistical resolution is the rejection of a generalized version of Hempel's instantiation condition, namely, (...)
    Direct download (8 more)  
     
    Export citation  
     
    Bookmark   4 citations  
  46. Unit Roots: Bayesian Significance Test.Julio Michael Stern, Marcio Alves Diniz & Carlos Alberto de Braganca Pereira - 2011 - Communications in Statistics 40 (23):4200-4213.
    The unit root problem plays a central role in empirical applications in the time series econometric literature. However, significance tests developed under the frequentist tradition present various conceptual problems that jeopardize the power of these tests, especially for small samples. Bayesian alternatives, although having interesting interpretations and being precisely defined, experience problems due to the fact that the hypothesis of interest in this case is sharp or precise. The Bayesian significance test used in this article, for the unit (...)
    Direct download (2 more)  
     
    Export citation  
     
    Bookmark  
  47.  1
    Justification of functional form assumptions in structural models: applications and testing of qualitative measurement axioms. [REVIEW]John K. Dagsvik & Stine Røine Hoff - 2011 - Theory and Decision 70 (2):215-254.
    In both theoretical and applied modeling in behavioral sciences, it is common to choose a mathematical specification of functional form and distribution of unobservables on grounds of analytic convenience without support from explicit theoretical postulates. This article discusses the issue of deriving particular qualitative hypotheses about functional form restrictions in structural models from intuitive theoretical axioms. In particular, we focus on a family of postulates known as dimensional invariance. Subsequently, we discuss how specific qualitative postulates can be reformulated so as (...)
    No categories
    Direct download (3 more)  
     
    Export citation  
     
    Bookmark   1 citation  
  48.  96
    Inconsistent multiple testing corrections: The fallacy of using family-based error rates to make inferences about individual hypotheses.Mark Rubin - 2024 - Methods in Psychology 10.
    During multiple testing, researchers often adjust their alpha level to control the familywise error rate for a statistical inference about a joint union alternative hypothesis (e.g., “H1,1 or H1,2”). However, in some cases, they do not make this inference. Instead, they make separate inferences about each of the individual hypotheses that comprise the joint hypothesis (e.g., H1,1 and H1,2). For example, a researcher might use a Bonferroni correction to adjust their alpha level from the conventional level (...)
    Direct download (2 more)  
     
    Export citation  
     
    Bookmark  
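The distinction the abstract draws, between a joint union inference and separate individual inferences made at the same Bonferroni-adjusted level, can be sketched with a small simulation under a true joint null. The function name and trial counts are hypothetical, chosen only to make the two error rates visible:

```python
import random

def bonferroni_demo(m=2, alpha=0.05, trials=20000, seed=7):
    """Under a true joint null, compare the familywise error rate
    (any of m tests rejecting at the adjusted level alpha/m) with the
    error rate of a single individual test run at that same level."""
    rng = random.Random(seed)
    adj = alpha / m
    fwer_hits = 0
    single_hits = 0
    for _ in range(trials):
        pvals = [rng.random() for _ in range(m)]  # null p-values ~ Uniform(0,1)
        fwer_hits += any(p < adj for p in pvals)  # joint "H1,1 or H1,2" inference
        single_hits += pvals[0] < adj             # individual inference at alpha/m
    return fwer_hits / trials, single_hits / trials

fwer, single = bonferroni_demo()
print(fwer, single)  # fwer is near alpha; the individual test runs far below it
```

The familywise rate sits near the nominal alpha, which is what the correction controls, while each individual inference operates at roughly alpha/m, which is the mismatch the paper argues makes such individual inferences inconsistent with the correction's rationale.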
  49. Testing the Independence of Poisson Variates under the Holgate Bivariate Distribution: The Power of a New Evidence Test.Julio Michael Stern & Shelemyahu Zacks - 2002 - Statistics and Probability Letters 60:313-320.
    A new Evidence Test is applied to the problem of testing whether two Poisson random variables are dependent. The dependence structure is that of Holgate’s bivariate distribution. This bivariate distribution depends on three parameters, 0 < theta_1, theta_2 < infty, and 0 < theta_3 < min(theta_1, theta_2). The Evidence Test was originally developed as a Bayesian test, but in the present paper it is compared to the best known test of the hypothesis of independence in a frequentist framework. (...)
    Direct download (2 more)  
     
    Export citation  
     
    Bookmark   10 citations  
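Holgate's bivariate Poisson distribution mentioned in this abstract has a simple trivariate-reduction construction in which theta_3 is exactly the covariance of the pair, so theta_3 = 0 is the independence null. A minimal sampler sketch under assumed parameter values (not the paper's Evidence Test itself):

```python
import math
import random

def holgate_sample(theta1, theta2, theta3, n, seed=3):
    """Draw n pairs from Holgate's bivariate Poisson via trivariate
    reduction: X = U + W, Y = V + W, with independent
    U ~ Pois(theta1 - theta3), V ~ Pois(theta2 - theta3), W ~ Pois(theta3).
    Then Cov(X, Y) = theta3, so theta3 = 0 is the independence null."""
    rng = random.Random(seed)

    def pois(lam):
        # Knuth's multiplicative inversion sampler (fine for small lam)
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    pairs = []
    for _ in range(n):
        u = pois(theta1 - theta3)
        v = pois(theta2 - theta3)
        w = pois(theta3)
        pairs.append((u + w, v + w))
    return pairs

pairs = holgate_sample(2.0, 3.0, 1.0, 20000)
xs, ys = zip(*pairs)
n = len(pairs)
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in pairs) / n
print(mx, my, cov)  # sample covariance is close to theta3
```

Because the covariance equals theta_3, a test of independence reduces to testing the sharp hypothesis theta_3 = 0, which is why a Bayesian evidence test for precise hypotheses applies naturally here.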
  50.  18
    Statistical Power and P-values: An Epistemic Interpretation Without Power Approach Paradoxes.Guillaume Rochefort-Maranda - unknown
    It has been claimed that if statistical power and p-values are both used to measure the strength of our evidence for the null-hypothesis when the results of our tests are not significant, then they can also be used to derive inconsistent epistemic judgements as we compare two different experiments. Those problematic derivations are known as power approach paradoxes. The consensus is that we can avoid them if we abandon the idea that statistical power can measure the strength (...)
    No categories
    Direct download (2 more)  
     
    Export citation  
     
    Bookmark   2 citations  
1 — 50 / 1000