Bias and values in scientific research

https://doi.org/10.1016/j.shpsa.2008.12.005

Abstract

When interests and preferences of researchers or their sponsors cause bias in experimental design, data interpretation or dissemination of research results, we normally think of it as an epistemic shortcoming. But as a result of the debate on science and values, the idea that all ‘extra-scientific’ influences on research could be singled out and separated from pure science is now widely believed to be an illusion. I argue that nonetheless, there are cases in which research is rightfully regarded as epistemologically deficient due to the influence of preferences on its outcomes. I present examples from biomedical research and offer an analysis in terms of social epistemology.

Introduction

Bias is becoming increasingly recognized as a serious problem in many areas of scientific research. Of particular concern are cases in which research results seem directly to reflect the preferences and interests of certain actors involved in the research process. Troubling examples of this have been identified, especially in privately funded research and in policy-related areas.

Intuitively (and traditionally) it seems clear that the suggested kind of bias constitutes outright epistemic failure. But philosophers of science have begun to realize that the ideal of pure and value-free science is at best just that—an ideal—and that all scientific practice involves all kinds of value-judgments. While some philosophers have sought to distinguish acceptable from unacceptable influences of values on science, efforts to draw this distinction in a principled way have proven immensely difficult (see Sect. 6). So why should not some values that inform scientific research be, for example, shareholder values?

My primary aim in this paper is to describe and define the suggested kind of bias in a way that allows us to characterize it as an epistemic shortcoming of the research in question. I will end up arguing that one need not deny the inevitable value-ladenness of science in order to mark certain cases of bias as being scientifically unacceptable.

Note that my aim is not to analyze the concept of bias. There are many widely differing uses of ‘bias’ both within science and within philosophy—enough to suggest that the word is polysemic (cf. Gluud, 2006; Goldman, 1999, Sect. 8.3; Resnik, 2000). I am interested in a certain phenomenon, which I will introduce with the help of examples in the following section and try to characterize provisionally.

Section snippets

Preference bias

In the context of science and values, a phenomenon that I will call preference bias is of particular interest. It occurs when a research result unduly reflects the researchers’ preference for it over other possible results. (Note that this is a special kind of bias; the term ‘bias’ is also often applied to cases of systematic error that need have nothing to do with investigators’ preferences for one result or another. A classic example is the kind of bias in clinical trials introduced by …
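
To make the phenomenon concrete, here is a toy illustration (my addition, not an example from the paper): a researcher who prefers a positive result can obtain one more often than chance permits simply by testing the data repeatedly as they accumulate and stopping at the first ‘significant’ outcome. The Python sketch below simulates such ‘optional stopping’ under a true null hypothesis; the inflated false-positive rate is a standard statistical fact, and all names and parameters are merely illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def optional_stopping(n_max=100, peek_every=10, n_sims=10_000):
        # Simulate data with no real effect (the null hypothesis is true),
        # but peek after every `peek_every` observations and stop as soon
        # as the z statistic crosses the nominal significance threshold.
        z_crit = 1.96  # two-sided 5% critical value
        hits = 0
        for _ in range(n_sims):
            data = rng.normal(0.0, 1.0, n_max)  # true mean is 0
            for n in range(peek_every, n_max + 1, peek_every):
                sample = data[:n]
                z = sample.mean() / (sample.std(ddof=1) / np.sqrt(n))
                if abs(z) > z_crit:
                    hits += 1  # a 'significant' result is declared
                    break
        return hits / n_sims

    # The false-positive rate comes out well above the nominal 0.05
    # (roughly 0.15–0.20 with these settings).
    print(optional_stopping())

The point is not that such designs are always deliberate manipulation; it is that a preference for one outcome can express itself through seemingly technical choices.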

Preference bias and inductive risk

In every empirical investigation that is designed to test some hypothesis H, two kinds of risk can be identified: the risk that the investigation may lead to the acceptance of H while H is in fact false, and, conversely, the risk of rejecting H when H is in fact true. Carl Hempel (1965, pp. 91–92) coined the term ‘inductive risk’ to cover these two types of risk. It was recognized early on in the development of statistics that in contexts relevant for practical applications, the …
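
In standard statistical terminology (a gloss of mine, not Hempel’s own notation), these two risks are the familiar Type I and Type II error probabilities:

\[
\alpha = P(\text{reject } H \mid H \text{ is true}), \qquad
\beta = P(\text{accept } H \mid H \text{ is false}).
\]

For a fixed design the two cannot be driven down together: a stricter significance level lowers \(\alpha\) but, at a given sample size, raises \(\beta\), so every design choice implicitly weighs one kind of error against the other.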

Inductive risk and the evaluation of outcomes

This tentative analysis of preference bias in terms of inductive risk seems to face a serious problem, though. The analysis under consideration starts from the implicit assumption that there is a certain correct or impartial balance between the two kinds of inductive risk that exists independent of the researcher’s preferences, and that preference bias consists in the deviation from that balance. However, it has long been argued that in cases where the aim is to accept or reject a hypothesis on …
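
The challenge can be put in a simple decision-theoretic form (my reconstruction, not a formula from the paper). Suppose that wrongly accepting H costs \(c_1\) and wrongly rejecting it costs \(c_2\); minimizing expected loss then dictates accepting H on evidence E exactly when

\[
P(H \mid E) \;\geq\; \frac{c_1}{c_1 + c_2}.
\]

The evidential threshold is thus itself a function of how the two possible mistakes are valued, which is the core of the Churchman–Rudner challenge: no balance of inductive risks can be fixed without some such valuation.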

The ideal of purity

There is one sense in which presumably everyone would agree that the shortcomings of cases of bias are relative to a value; they are at least relative to whichever value is ascribed to replacing ignorance with true belief. One way to save the intuition appealed to at the end of the last section might therefore be to describe the treatment of inductive risk as relative to this value and only this value. A response to the challenge of Churchman and Rudner along these lines was attempted by …

Relaxed purity

However, I submit that regarding L-bias as an analysis of real-world cases of preference bias would be much too simple. It presupposes a sense of purity of epistemic activity that is exaggerated and unrealistic. To begin with, it has long been recognized that science, even if conceived as essentially a truth-seeking enterprise, does not pursue each truth with the same eagerness. In the terminology introduced by Kitcher (1993, 2001), science aims to find significant truths, where …

Trust and bias: the perspective of social epistemology

From the vantage point of individualist epistemology, informed by the insight that purist strictures of value-free science cannot be generally upheld, it thus still appears that the cases described at the outset simply reflect the variability of scientific procedure under different admissible value judgments. Remarkably, this is not how the biomedical research community seems to regard the matter. Instead, the community employs a variety of social mechanisms in order to set up conventional …

Conclusions

I have maintained that preference bias consists in the infringement of conventional standards entertained by the respective research community. This analysis captures the intuition that preference bias constitutes an epistemic shortcoming, as the conventional standards themselves are adopted by the community in an effort to make possible and preserve epistemic trust and to ensure the community’s capability of fulfilling its epistemological roles. It also explains why the diagnosis of preference …

Acknowledgements

I would like to thank Justin Biddle, Jim Brown, Martin Carrier, Cornelis Menke, Birgitte Wandall, Ken Westphal, Eric Winsberg, Alison Wylie and an anonymous referee for this journal for their helpful remarks on earlier versions of this paper.

References (59)

  • B. Djulbegovic et al. (2000). The uncertainty principle and industry-sponsored research. The Lancet.
  • Association of American Medical Colleges, Task Force on Financial Conflicts of Interest in Clinical Research. (2001)....
  • C.S. Barrow et al. (2006). Assessing the reliability and credibility of industry science and scientists. Environmental Health Perspectives.
  • J.E. Bekelman et al. (2003). Scope and impact of financial conflicts of interest in biomedical research: A systematic review. Journal of the American Medical Association.
  • J. Biddle (2007). Lessons from the Vioxx debacle: What the privatization of science can teach us about social epistemology. Social Epistemology.
  • M. Blettner et al. (1999). Traditional reviews, meta-analyses and pooled analyses in epidemiology. International Journal of Epidemiology.
  • D. Blumenthal et al. (1997). Withholding research results in academic life science: Evidence from a national survey of faculty. Journal of the American Medical Association.
  • Boseley, S. (2006). Renowned cancer scientist was paid by chemical firm for 20 years. The Guardian, 8 December,...
  • A.-W. Chan et al. (2004). Empirical evidence for selective reporting of outcomes in randomized trials: Comparison of protocols to published articles. Journal of the American Medical Association.
  • C.W. Churchman (1948). Theory of experimental inference.
  • S.L. Coyle (2002). Physician–industry relations, Pt. 1. Individual physicians. Annals of Internal Medicine.
  • H.P. Dietz (2007). Bias in research and conflict of interest: Why should we care? International Urogynecology Journal.
  • R. Doll (1988). Effects of exposure to vinyl chloride: An assessment of the evidence. Scandinavian Journal of Work, Environment & Health.
  • H. Douglas (2000). Inductive risk and values in science. Philosophy of Science.
  • M.F.W. Festing et al. (2002). Guidelines for the design and statistical analysis of experiments using laboratory animals. ILAR Journal.
  • L.L. Gluud (2006). Bias in clinical intervention research. American Journal of Epidemiology.
  • A.I. Goldman (1999). Knowledge in a social world.
  • Hempel, C. G. (1965). Science and human values. In idem, Aspects of scientific explanation (pp. 81–96). New York: Free...
  • International Committee of Medical Journal Editors. (2007). Uniform requirements for manuscripts submitted to...
  • R.C. Jeffrey (1956). Valuation and acceptance of scientific hypotheses. Philosophy of Science.
  • H.K. Johansen et al. (1999). Problems in the design and reporting of trials of antifungal agents encountered during meta-analysis. Journal of the American Medical Association.
  • P.D. Johnson et al. (2002). Practical aspects of experimental design in animal research. ILAR Journal.
  • P. Kitcher (1993). The advancement of science.
  • P. Kitcher (2001). Science, truth and democracy.
  • L.L. Kjaergard et al. (2002). Association between competing interests and authors’ conclusions: Epidemiological study of randomized clinical trials published in the BMJ. British Medical Journal.
  • Korn, D., & Ehringhaus, S. (2006). Principles for strengthening the integrity of clinical research. PLoS Clinical...
  • J.A. Kourany (2003). A philosophy of science for the twenty-first century. Philosophy of Science.
  • S. Krimsky (2003). Science in the private interest: Has the lure of profits corrupted biomedical research?
  • Kuhn, T. S. (1977). Objectivity, value judgment, and theory choice. In idem, The essential tension (pp. 320–339)....