Bias and values in scientific research
Introduction
Bias is becoming increasingly recognized as a serious problem in many areas of scientific research. Of particular concern are cases in which research results seem directly to reflect the preferences and interests of certain actors involved in the research process. Troubling examples of this have been identified, especially in privately funded research and in policy-related areas.
Intuitively (and traditionally) it seems clear that the suggested kind of bias constitutes outright epistemic failure. But philosophers of science have begun to realize that the ideal of pure and value-free science is at best just that—an ideal—and that all scientific practice involves all kinds of value-judgments. While some philosophers have sought to distinguish acceptable from unacceptable influences of values on science, efforts to draw this distinction in a principled way have proven immensely difficult (see Sect. 6). So why should some of the values that inform scientific research not be, for example, shareholder values?
My primary aim in this paper is to describe and define the suggested kind of bias in a way that allows us to characterize it as an epistemic shortcoming of the research in question. I will end up arguing that one need not deny the inevitable value-ladenness of science in order to mark certain cases of bias as being scientifically unacceptable.
Note that my aim is not to analyze the concept of bias. There are many widely differing uses of ‘bias’ both within science and within philosophy—enough to suggest that the word is polysemic (cf. Gluud, 2006; Goldman, 1999, Sect. 8.3; Resnik, 2000). I am interested in a certain phenomenon, which I will introduce with the help of examples in the following section and try to characterize provisionally.
Preference bias
In the context of science and values, a phenomenon that I will call preference bias is of particular interest. It occurs when a research result unduly reflects the researchers’ preference for it over other possible results. (Note that this is a special kind of bias; the term ‘bias’ is also often applied to cases of systematic error that need have nothing to do with investigators’ preferences for one result or another. A classic example is the kind of bias in clinical trials introduced by
Preference bias and inductive risk
In every empirical investigation that is designed to test some hypothesis H, two kinds of risk can be identified: the risk that the investigation may lead to the acceptance of H while H is in fact false, and, conversely, the risk of rejecting H when H is in fact true. Carl Hempel (1965, pp. 91–92) coined the term ‘inductive risk’ to cover these two types of risk. It was recognized early on in the development of statistics that in contexts relevant for practical applications, the
Inductive risk and the evaluation of outcomes
This tentative analysis of preference bias in terms of inductive risk seems to face a serious problem, though. The analysis under consideration starts from the implicit assumption that there is a certain correct or impartial balance between the two kinds of inductive risk that exists independent of the researcher’s preferences, and that preference bias consists in the deviation from that balance. However, it has long been argued that in cases where the aim is to accept or reject a hypothesis on
The ideal of purity
There is one sense in which presumably everyone would agree that the shortcomings of cases of bias are relative to a value; they are at least relative to whichever value is ascribed to replacing ignorance with true belief. One way to save the intuition appealed to at the end of the last section might therefore be to describe the treatment of inductive risk as relative to this value and only this value. A response to the challenge of Churchman and Rudner along these lines was attempted by
Relaxed purity
However, I submit that regarding L-bias as an analysis of real-world cases of preference bias would be much too simple. It presupposes a sense of purity of epistemic activity that is exaggerated and unrealistic. To begin with, it has long been recognized that science, even if conceived as essentially a truth-seeking enterprise, does not pursue each truth with the same eagerness. In the terminology introduced by Kitcher (1993, 2001), science aims to find significant truths, where
Trust and bias: the perspective of social epistemology
From the vantage point of individualist epistemology, informed by the insight that purist strictures of value-free science cannot be generally upheld, it thus still appears that the cases described at the outset simply reflect the variability of scientific procedure under different admissible value judgments. Remarkably, this is not how the biomedical research community seems to regard the matter. Instead, the community employs a variety of social mechanisms in order to set up conventional
Conclusions
I have maintained that preference bias consists in the infringement of conventional standards entertained by the respective research community. This analysis captures the intuition that preference bias constitutes an epistemic shortcoming, as the conventional standards themselves are adopted by the community in an effort to make possible and preserve epistemic trust and to ensure the community’s capability of fulfilling its epistemological roles. It also explains why the diagnosis of preference
Acknowledgements
I would like to thank Justin Biddle, Jim Brown, Martin Carrier, Cornelis Menke, Birgitte Wandall, Ken Westphal, Eric Winsberg, Alison Wylie and an anonymous referee for this journal for their helpful remarks on earlier versions of this paper.
References (59)
- et al. (2000). The uncertainty principle and industry-sponsored research. The Lancet.
- Association of American Medical Colleges, Task Force on Financial Conflicts of Interest in Clinical Research. (2001)....
- et al. (2006). Assessing the reliability and credibility of industry science and scientists. Environmental Health Perspectives.
- et al. (2003). Scope and impact of financial conflicts of interest in biomedical research: A systematic review. Journal of the American Medical Association.
- (2007). Lessons from the Vioxx debacle: What the privatization of science can teach us about social epistemology. Social Epistemology.
- et al. (1999). Traditional reviews, meta-analyses and pooled analyses in epidemiology. International Journal of Epidemiology.
- et al. (1997). Withholding research results in academic life science: Evidence from a national survey of faculty. Journal of the American Medical Association.
- Boseley, S. (2006). Renowned cancer scientist was paid by chemical firm for 20 years. The Guardian, 8 December,...
- et al. (2004). Empirical evidence for selective reporting of outcomes in randomized trials: Comparison of protocols to published articles. Journal of the American Medical Association.
- (1948). Theory of experimental inference.
- Physician–industry relations, Pt. 1. Individual physicians. Annals of Internal Medicine.
- Bias in research and conflict of interest: Why should we care? International Urogynecology Journal.
- Effects of exposure to vinyl chloride: An assessment of the evidence. Scandinavian Journal of Work, Environment & Health.
- Inductive risk and values in science. Philosophy of Science.
- Guidelines for the design and statistical analysis of experiments using laboratory animals. ILAR Journal.
- Bias in clinical intervention research. American Journal of Epidemiology.
- Knowledge in a social world.
- Valuation and acceptance of scientific hypotheses. Philosophy of Science.
- Problems in the design and reporting of trials of antifungal agents encountered during meta-analysis. Journal of the American Medical Association.
- Practical aspects of experimental design in animal research. ILAR Journal.
- The advancement of science.
- Science, truth and democracy.
- Association between competing interests and authors’ conclusions: Epidemiological study of randomized clinical trials published in the BMJ. British Medical Journal.
- A philosophy of science for the twenty-first century. Philosophy of Science.
- Science in the private interest: Has the lure of profits corrupted biomedical research?