OPINION article

Front. Psychol., 13 August 2019
Sec. Quantitative Psychology and Measurement
This article is part of the Research Topic "Epistemological and Ethical Aspects of Research in the Social Sciences."

The Rationality of Science and the Inevitability of Defining Prior Beliefs in Empirical Research

Ulrich Dettweiler*

  • Department of Cultural Studies and Languages, Faculty of Arts and Education, University of Stavanger, Stavanger, Norway

Introduction

The recent “campaign” in Nature against the concept of “significance testing” (Amrhein et al., 2019), with more than 800 leading scientists as supporting signatories, can be considered an important milestone and resounding event in the long-ongoing struggle and “quiet revolution” (Rodgers, 2010) in statistics over logical, epistemological, and praxeological aspects (Meehl, 1997; Sprenger and Hartmann, 2019), criticizing oversimplified and thoughtless statistical analyses, which can still be found in an overwhelming number of publications to date. So-called frequentists, the Neyman/Pearson and Fisher schools, and those who apply a hybrid scheme of the two schools (Mayo, 1996) or simple Null Hypothesis Significance Testing (NHST), likelihoodists, and Bayesians alike have debated their approaches over the past decades. This finally led to a discourse facilitated by the American Statistical Association, resulting in a special issue of The American Statistician (Vol. 73/2019) titled “Statistical Inference in the 21st Century: A World Beyond p < 0.05,” with “43 innovative and thought-provoking papers from forward-looking statisticians” (Wasserstein et al., 2019, p. 1). The special issue proposes both new ways to report the importance of research results beyond the arbitrary threshold of a categorical p-value and some guidelines of conduct: researchers should accept uncertainty and be thoughtful, open, and modest in their claims (Wasserstein et al., 2019). The future will show whether these attempts at statistically better-supported science beyond significance testing will be echoed in the publications to come.

A corresponding discourse has been led by the Royal Statistical Society, where Andrew Gelman and Christian Hennig's contribution “Beyond subjective and objective in statistics” was discussed by more than 50 leading statisticians (Gelman and Hennig, 2017). They suggest abandoning the rather vague terms “objectivity” and “subjectivity,” and replacing them with “transparency, consensus, impartiality, and correspondence to observable reality” for the former, and “awareness of multiple perspectives and context dependence” for the latter. Together with “stability,” these should “make up a collection of virtues” that they consider “helpful in discussions of statistical foundations and practice” (Gelman and Hennig, 2017, p. 967).

Yet questioning the very concept of “objectivity” might seem quite provocative, even absurd, to most empirical scientists, who hold “objectivity” to be a central property of observables, or at least a property of the scientific method that produces pure, value-free facts. In this light, it is interesting to note that both strategies for overcoming the “statistical crisis in science” (Gelman and Loken, 2014) focus on the researchers' conduct and employ moral categories for the ontological and epistemological problem of what we should believe.

In this article, I will stress the importance of epistemic beliefs in science for the methods we employ. For this purpose, I will recall an argument that Hilary Putnam proposed more than 35 years ago in his critique of scientific realism. Putnam's philosophy of science was discussed by statisticians such as Meehl and Cronbach at that time (Fiske and Shweder, 1986), but his ideas have since been overlooked in the above-mentioned discourses. Putnam claims that the concept of rationality, as it is assumed in science, is in fact deeply irrational if it considers methods to be purely formal, distinct, and free from value judgements. There is also an informal part inherent to rationality in science, which depends on the changing beliefs of scientists.

At the core of Putnam's argument lies a fundamental critique of verificationism with its correspondence theory of truth, which is disguised in the assumption that there are such things as “objective” facts, independent of our “subjective” experiences, thoughts, and language.

The Impact of Science on Modern Conceptions of Rationality

A prominent account of such scientific realism can be found in a later work of John Searle, with whom Putnam fought many philosophical battles (Horowitz, 1996; Cruickshank, 2003).

According to Searle, modern science falls back on “default positions” that are not questioned, and “any departure from them requires a conscious effort and a convincing argument.” The most central default position implicit in standard empirical research is that we have direct perceptual access to the world through our senses and that the world exists independently of human observation, which is labeled a “correspondence theory of truth” (Searle, 1999).

Yet the philosophical cost of such an epistemological stance is high: the underlying ontological assumptions in correspondence theories become increasingly counterintuitive and less understandable as their metaphysical ingredients are attenuated, since they require positioning the researcher as having an entirely external “god's eye point of view” (Putnam, 1981, p. 49). In other words, despite the anti-transcendentalist claim of such positivist sciences, the forms of rationality employed rest upon much more substantial metaphysical assumptions than pragmatist methodologies; and with increasing skepticism, the comprehensibility and commonsensical acceptability of science decreases (Dettweiler, 2015).

Although Putnam changed his philosophical ideas throughout his life, one constant theme (at least since the 1970s) is his pragmatist ontological position, which at many points is neither realist nor idealist. He claims that, although the world may be causally independent of the human mind, the structure of the world (both in terms of individuals and categories) is a function of the human mind and hence not ontologically independent (cf. Brown, 1988). Here, Putnam refers to Kant's concept of the dependence of our knowledge of the world on the “categories of thought,” and he claims that there is “a fact of the matter as to whether the statements people make are warranted or not” (Putnam, 1981, p. 21, italics by U.D.). This material, realistic reference allows Putnam to talk about warranted truth that is “independent of whether the majority of one's cultural peers would say it is warranted or unwarranted” (ibid). In this respect, Putnam is more than a mere consensus theorist, but not yet a naturalistic realist. He argues instead that “reason can't be naturalized” (Putnam, 1983) and that here and now “truth is independent of justification…, but not independent of all justification. To claim a statement is true is to claim it could be justified” (Putnam, 1981, p. 56). Or, as Cronbach (1986) reframes Putnam: “Realism is an empirical hypothesis … that can be defended if we observe that a science converges” (p. 90).

So the main challenge to empirical science is the implicit refutation of the claim that the world is accessible independently of its interpretation through our senses and language. It is, according to Putnam, conceptually impossible “to draw a sharp line between the content of science and the method of science,” and “the method of science in fact changes constantly as the content of science changes” (Putnam, 1981, p. 191).

“Tuning-free” Does Not Mean “Value-free”

This has, or rather should have, direct implications for the understanding of modern science and the statistical framework it is built on. Putnam argues that any scientific methodology needs to take into account the prior beliefs of scientists and the degree of uncertainty of hypotheses. This means, on the other hand, that we scientists need to make explicit those beliefs that are implicit in the methodologies we apply, and to quantify uncertainty in some way.

This is often an alien thought to scientists who apply frequentist statistics in their data analyses and reject the “use of subjective uncertainty in the context of scientific inquiry” (Sprenger, 2016, p. 382). It is the very idea of frequentist statistics that, in the long run, the underlying procedure leads to a (probably) correct result irrespective of the researchers' beliefs. Yet the convenience of standard statistical programs with their many default settings should not disguise the many choices implicitly made in even the simplest statistical operations. Most researchers hardly question why we fit the data to a Gaussian model with a uniform distribution on an infinite range for each of the parameters, and a uniform distribution for the error term as well. With the decision to model the data linearly, according to a normal function within an infinite range of possible values, there are already a number of value-driven presuppositions in the model before we have even started entering the data. The rationale behind the uniform prior probability functions used in standard statistical models is, of course, that they contain as little information as possible, in order to make the procedure “neutral.” But as Gelman and Hennig (2017) argue, “even using ‘no need for tuning' as a criterion for method selection or prioritizing bias, for example, or mean-squared error, is a subjective decision” (p. 971).
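
To make this concrete, consider a minimal sketch in Python/NumPy (an illustration of my own with invented data, not part of the original argument): the “default” least-squares line fit is exactly the maximum-likelihood estimate under a Gaussian error model, and the posterior mode under flat priors, so the modeling choices are already made before any data are entered.

```python
# Illustration only: a "default" linear fit and the assumptions baked into it.
# Ordinary least squares = maximum likelihood under Gaussian errors
#                        = posterior mode under flat (uniform) priors.
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0, 10, 50)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=x.size)  # hypothetical data

X = np.column_stack([np.ones_like(x), x])   # design matrix: intercept, slope
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Nothing in the data dictated a linear predictor, Gaussian errors, or uniform
# priors on an infinite range; those were modeling decisions made in advance.
print(beta_hat)   # roughly [2.0, 0.5]
```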

There is, as Gelman and Hill (2007) state, nothing wrong with modeling data with uniform distributions on all the parameters. They call those models “reference” models, which provide some important preliminary information in a given data analysis. However, “neutral” does not mean “value-free.” We can conceive of many other distribution functions, with more specified parameters, informed by previous research and representing the researchers' prior beliefs, which might better fit the data.
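
As a sketch of this point (an illustration with invented numbers, not an example taken from Gelman and Hill), the following Python/NumPy snippet contrasts a near-flat, “reference-style” prior with an informative prior for the mean of normally distributed data; the informative analysis simply encodes what earlier studies might suggest.

```python
# Illustration only: flat ("reference") vs. informative prior for a normal mean
# with known standard deviation, using the standard conjugate update.
import numpy as np

rng = np.random.default_rng(1)
sigma = 2.0                                    # assumed known data SD
y = rng.normal(loc=1.5, scale=sigma, size=20)  # hypothetical data
n, ybar = len(y), y.mean()

def posterior(mu0, tau0_sq):
    """Posterior mean and variance under the prior N(mu0, tau0_sq)."""
    precision = 1.0 / tau0_sq + n / sigma**2
    mean = (mu0 / tau0_sq + n * ybar / sigma**2) / precision
    return mean, 1.0 / precision

print(posterior(mu0=0.0, tau0_sq=1e6))   # near-flat prior: ~ (ybar, sigma^2/n)
print(posterior(mu0=1.0, tau0_sq=0.25))  # prior informed by earlier research
```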

Bayes' theorem does provide us with a statistical framework that tells us how data should change our (subjective) degrees of belief in a hypothesis, within a formal model of rational belief provided by the probability calculus. Bayes' theorem states that the posterior distribution, i.e., the probability of the parameters given the data, is proportional to the likelihood, which is the probability of observing the data given the parameters (unknowns), multiplied by the prior probability, which represents external knowledge about the parameters.
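
In symbols, using the standard rendering of the relation just described, with θ for the parameters and y for the data:

```latex
% Bayes' theorem: posterior = likelihood x prior, normalized by the
% marginal likelihood p(y); hence posterior \propto likelihood x prior.
p(\theta \mid y) \;=\; \frac{p(y \mid \theta)\, p(\theta)}{p(y)}
\;\propto\; p(y \mid \theta)\, p(\theta).
```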

In fact, Putnam sees subjective Bayesianism as the statistical framework that can assume a formalized language of science in which reliable observations, together with some hypotheses, can be rationally expressed.

It is from this point that Gelman and Hennig (2017) initiate their proposal to collapse the dichotomy of objectivity and subjectivity altogether. They demonstrate that those prior probability functions are not so much “subjective degrees of belief” but rather “external information” on a specific research question, including “restrictions such as smoothness or sparsity that serve to regularize estimates in high dimensional settings, … the choice of the functional form in a regression model, … and … numerical information about particular parameters in a model.” This is why Sprenger (2018) argues that so-called “subjective Bayesianism” should in fact be understood as “objective,” thereby defending the language of “objectivity” in science.
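
One way to read such “external information” operationally (a sketch under my own assumptions, not an example from Gelman and Hennig) is the familiar observation that a zero-mean Gaussian prior on regression coefficients regularizes the estimates: its posterior mode coincides with the ridge solution, whereas a flat prior reproduces ordinary least squares.

```python
# Illustration only: a Gaussian prior on coefficients acting as regularization.
# The posterior mode under beta ~ N(0, I/lam) is the ridge estimate; the
# flat-prior counterpart is ordinary least squares. Numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 10))                 # hypothetical design matrix
beta_true = np.zeros(10)
beta_true[:2] = 1.0                           # mostly-zero ("sparse") truth
y = X @ beta_true + rng.normal(scale=0.5, size=30)

lam = 1.0  # prior strength (ratio of noise variance to prior variance, assumed)
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(10), X.T @ y)
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)  # flat-prior counterpart

print(np.round(beta_ridge, 2))  # shrunk toward zero by the prior
print(np.round(beta_ols, 2))
```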

Good Science Is a Matter of Ethics, But Not Alone

I agree with Gelman and Hennig that the dichotomy of “subjective” and “objective” causes a lot of confusion in science, especially when it is applied to classify statistical methodology. It is misleading to (dis)qualify Bayesian statistics as “subjective” when prior probability functions for each parameter in a model are defined with great rigor and transparency. It is also misleading when frequentist researchers use default settings in analyses and claim “objectivity” on their side.

This is, however, not so much a question of ethics. Nor can this tension be resolved by introducing rules for the virtuous scientist. It is rather a symptom of a fundamental epistemological crisis in modern science. The philosophy of science has been too detached from the empirical sciences and statistics for too long, and those gaps need to be bridged through the education of scientists in epistemology, a claim made by Meehl more than 20 years ago (Meehl, 1997). The enhanced rigor of scientific enquiry will then follow, since the scientific virtues are inspired by the epistemic beliefs that scientists hold. We simply need to learn again to argue for our epistemological stances, and to define the epistemic claims we make with our statistical analyses, given the data. The epistemologically informed scientist would certainly not be scared to endorse subjectivity as a reliable philosophical concept for empirical science, as Putnam has shown.

Or, as Ian Hacking wittily summarizes this crisis, all we need to do is think harder, not more objectively (Hacking, 2015).

Author Contributions

The author confirms being the sole contributor of this work and has approved it for publication.

Funding

The University of Stavanger supported this work with a sabbatical and a grant in the program for Yngre Fremragende Forskere financed by the Norwegian Research Council (project number IN11714).

Conflict of Interest Statement

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

The brief summary of Putnam's account is taken nearly verbatim from my dissertation (Dettweiler, 2015): Educational Research in the Mirror of Nature: Theoretical, Epistemological, and Empirical Aspects of Mixed-Method Approaches in Outdoor Education (Ph.D. thesis, Technische Universität München, München). Many thanks to Dr. Mike Rogerson and the reviewer, whose valuable comments very much improved the line of thought in this article.

References

Amrhein, V., Greenland, S., and McShane, B. (2019). Retire statistical significance. Nature 567, 305–307. doi: 10.1038/d41586-019-00857-9

Brown, C. (1988). Internal realism: transcendental idealism? Midwest Stud. Philos. 12, 145–155.

Cronbach, L. J. (1986). “Social inquiry by and for earthlings,” in Metatheory in Social Science: Pluralisms and Subjectivities, eds D. W. Fiske and R. A. Shweder (Chicago: The University of Chicago Press), 83–107.

Cruickshank, J. (2003). Realism and Sociology: Anti-Foundationalism, Ontology, and Social Research, Vol. 5. London, New York, NY: Routledge.

Dettweiler, U. (2015). Educational Research in the Mirror of Nature. Theoretical, Epistemological, and Empirical Aspects of Mixed-Method Approaches in Outdoor Education. PhD Thesis, Technische Universität München, München. Retrieved from: http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:91-diss-20150602-1241435-1-3

Fiske, D. W., and Shweder, R. A. (eds.). (1986). Metatheory in Social Science: Pluralisms and Subjectivities. Chicago: The University of Chicago Press.

Gelman, A., and Hennig, C. (2017). Beyond subjective and objective in statistics. J. R. Stat. Soc. Ser. A 180, 967–1033. doi: 10.1111/rssa.12276

Gelman, A., and Hill, J. (2007). Data Analysis Using Regression and Multilevel/Hierarchical Models. New York, NY; Cambridge: Cambridge University Press.

Gelman, A., and Loken, E. (2014). The statistical crisis in science. Am. Sci. 102:460. doi: 10.1511/2014.111.460

Hacking, I. (2015). “Let's not talk about objectivity,” in Objectivity in Science, eds F. Padovani, A. Richardson, and J. Y. Tsou (Cham: Springer), 19–33.

Horowitz, A. (1996). Putnam, Searle, and externalism. Philos. Stud. 81, 27–69.

Mayo, D. G. (1996). Error and the Growth of Experimental Knowledge. Chicago; London: University of Chicago Press.

Meehl, P. (1997). “The problem is epistemology, not statistics: replace significance tests by confidence intervals and quantify accuracy of risky numerical predictions,” in What If There Were No Significance Tests? eds L. L. Harlow, S. A. Mulaik, and J. H. Steiger (Mahwah: Erlbaum), 393–425.

Putnam, H. (1981). Reason, Truth and History. Cambridge: Cambridge University Press.

Putnam, H. (1983). Realism and Reason. Cambridge: Cambridge University Press.

Rodgers, J. L. (2010). The epistemology of mathematical and statistical modeling: a quiet methodological revolution. Am. Psychol. 65, 1–12. doi: 10.1037/a0018326

Searle, J. R. (1999). Mind, Language and Society: Doing Philosophy in the Real World. London: Weidenfeld and Nicolson.

Sprenger, J. (2016). “Bayesianism vs. frequentism in statistical inference,” in The Oxford Handbook of Probability and Philosophy, eds A. Hájek and C. Hitchcock (Oxford: Oxford University Press), 382–405.

Sprenger, J. (2018). The objectivity of subjective Bayesianism. Eur. J. Philos. Sci. 8, 539–558. doi: 10.1007/s13194-018-0200-1

Sprenger, J., and Hartmann, S. (2019). Bayesian Philosophy of Science. Variations on a Theme by the Reverend Thomas Bayes. Oxford: Oxford University Press.

Wasserstein, R. L., Schirm, A. L., and Lazar, N. A. (2019). Moving to a World Beyond “p < 0.05”. Am. Stat. 73, 1–19. doi: 10.1080/00031305.2019.1583913

Keywords: Bayesian statistics, frequentist statistics, epistemology, prior probability function, rationality of science, philosophy of science

Citation: Dettweiler U (2019) The Rationality of Science and the Inevitability of Defining Prior Beliefs in Empirical Research. Front. Psychol. 10:1866. doi: 10.3389/fpsyg.2019.01866

Received: 29 June 2019; Accepted: 29 July 2019;
Published: 13 August 2019.

Edited by:

Alessandro Giuliani, Istituto Superiore di Sanità (ISS), Italy

Reviewed by:

Ryota Nomura, The University of Tokyo, Japan

Copyright © 2019 Dettweiler. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Ulrich Dettweiler, ulrich.dettweiler@uis.no
