Challenging research on human subjects: justice and uncompensated harms

Theoretical Medicine and Bioethics

Abstract

Ethical challenges to certain aspects of research on human subjects are not uncommon; examples include challenges to first-in-human trials (Chapman in J Clin Res Bioethics 2(4):1–8, 2011), certain placebo-controlled trials (Anderson in J Med Philos 31:65–81, 2006; Anderson and Kimmelman in Kennedy Inst Ethics J 20(1):75–98, 2010) and “sham” surgery (Macklin in N Engl J Med 341:992–996, 1999). To date, however, there are few challenges to research when the subjects are competent and the research is more than minimal risk with no promise of direct benefit. The principal reason given for allowing research that is more than minimal risk without benefit is that we should respect the autonomy of competent subjects. I argue that though the moral intuitions informing respect for autonomy are sound, there is another set of intuitions regarding what we take to be just treatment of another when one agent knowingly causes or allows suffering in another agent. I argue that concerns generated by commutative justice serve as limitations on permissible research. I highlight our intuitions informing this notion of justice by appealing to work done on theodicy: what counts as a morally sufficient reason for God to allow suffering in humans applies also to the researcher–subject relationship. I conclude that all human subjects who are exposed to more than minimal risk research should enjoy the same actual protections (e.g., subpart D) as those given subjects who cannot consent.


Notes

  1. For a good description of “vital importance” see [6, p. 134].

  2. See [7, p. 206]. Kong notes the following: “Medical research is a social activity whose principal justification is medical progress for which the assumed beneficiary is society.”

  3. Vulnerable populations are ones for whom informed consent is severely compromised through developmental immaturity (fetuses, children), through cognitive impairment or degenerative disease (the mentally disabled, Alzheimer’s patients), or through environmental factors that may be coercive (prisoners).

  4. A new subpart and/or Guidance document is being considered for adult subjects who do not have decision-making capacity. In this category would be advanced Alzheimer’s patients or the mentally disabled/mentally ill; see [8].

  5. There are, however, other sources of vulnerability; see [9, pp. 15ff.].

  6. The Belmont Report, for example, says the following: “The principle of respect for persons thus divides into two separate moral requirements: the requirement to acknowledge autonomy and the requirement to protect those with diminished autonomy” [10, p. 482; emphasis mine].

  7. Neil C. Manson and Onora O’Neill note that a subject’s consent has the effect of waiving her rights not to be harmed or experimented upon; see [11, pp. 72ff.]. For a related though more radical account, see [12].

  8. There are, however, problems with the concept of minimal risk. According to Wendler et al., IRBs are too strict in their interpretation of minimal risk [13]. They point out that the statistical prevalence of injury and death in “daily life” is fairly high, and yet people tolerate such risks. But Wendler points out elsewhere that daily-life risks are not analogous to research-induced risks [14]. First, many risks of daily life cannot be avoided, whereas the risks of research participation can be, simply by declining to enroll. Second, we tolerate the risks of certain activities because of the joy or pleasure we derive from them, including even risky competitive sports like football. But research that presents the risk of broken bones or torn ACLs does not always supply, on its own, a compensating benefit. Instead, Wendler settles on a “charitable participation” standard.

  9. The other requirements pertain to scientific design, equitable selection of subjects, informed consent issues, and safety monitoring.

  10. Friedman et al. quote from [19, n. 6].

  11. There is room to run an analogical argument, but I will not pursue that here. David Wendler has recently made the correct observation that what generates the ethical concern in research is that the researcher is in a position in which she exposes others to certain serious risks. This is a key feature that could ground the analog between theodicy and research ethics. See [21].

  12. I should note here that the sufferer is one who suffers undeservedly. Self-inflicted harm, of any sort, is a species of wrongdoing, and suffering caused by one’s own wrongdoing is not considered a problem for theism.

  13. Bruce Russell offers the real-life example of Sue, a five-year-old who was raped, beaten, and killed by her mother’s boyfriend. Obviously, this evil is apparently gratuitous no matter how good the consequences are downstream; see [25].

  14. Notice that what generates the criteria of what would count as a morally sufficient reason is not God per se but the moral principle, or at least a moral intuition that an agent who inflicts harm on another can only be justified in doing so if the sufferer benefits from the harm. And the thought experiment simply highlights this intuition. I mention this to be clear that I am not running an analogical argument between God and researcher.

  15. This case is discussed by many, notably [30, 31]. Most commentators on this case assume that consent does not justify Meiwes’s act of killing. And of course, I use this case not to draw an analogy with what researchers do, but to explore the “moral magic” of consent.

  16. A possible complication with my analysis is her example of getting a tattoo. My reading of this is that tattoo-giving is a service, and qua giving a service it depends on consent insofar as service giving typically involves a request for the service. Taking a needle with ink on it to a non-requesting person is not the giving of a service but is rightly categorized by Hurd as an instance of maiming.

  17. The notion of harm I am assuming here overlaps with suffering. Harm involves damage to one’s health broadly construed (including psycho-social health so as to include risky behavioral research). I do not hold to an interest-frustration account of harm. Explaining why is clearly a different project.

  18. Recall that although my argument emphasizes the need for a compensating benefit, I would add that knowledge of vital importance can function as a justifier for causing or allowing harms on another. But adding this feature does not impugn my overall thesis because justification of risky pediatric research countenances knowledge of vital importance as a legitimate justifier. And if both adult and pediatric research are justified in virtue of the same ethical considerations, pediatric subjects do not enjoy greater protection.

  19. To tether my reflections here to a particular case, I am thinking of the TGN1412 trial, which offered subjects about 3,500 USD. Now consider the study without such a payment offer. It is apparent that no one would consent to it; see [37].

  20. Resnik and Koski recommend having a national registry of healthy volunteers, since quite a few of them participate in numerous trials. The worry is that some do not wait for the required “wash out” period, which may magnify the risks to their health; see [38].

  21. I relegate to a footnote what is likely a key premise in my argument here only because this is not an article on rational choice or human action theory. Following Talbot Brewer [39], I consider a rational action as involving, at least, an apprehension of something good or worthwhile. Intentional actions must involve reasons, and the motivational force of such reasons is explained by an apprehension of something good. In a discussion on the desire-satisfaction theory of rational action, Brewer notes in response:

    One does not count as an agent simply in virtue of consistently behaving in ways that effectively bring about certain describable state of affairs [affairs seen as desirable]…. To be an agent is to set oneself in motion … on the strength of one’s sense that something counts in favor of doing so. That performing some action would bring about some state of affairs cannot intelligibly be regarded as counting in favor of performing the action unless one sees the state of affairs, or the effort to produce it, as itself good or valuable…. Desires can figure centrally in the rationalizing explanation of actions only if they involve a sense of the point or value of acting in the way they incline us to act, and only if they motivate us by inducing us to act on the strength of this evaluative outlook. [39, p. 28]

    On Brewer’s view, apprehending an end as good is a necessary condition for rational action. Returning to the example of entering a study whose risks grossly outweigh any benefits, it is hard to appreciate any reason for entering the study if one focuses just on the benefits and burdens the study promises.

  22. The Office of Human Research Protections gives the following guidance, “Direct payments or other forms of remuneration offered to potential subjects as an incentive or reward for participation should not be considered a ‘benefit’ to be gained from research…. Although participation in research may be a personally rewarding activity or a humanitarian contribution, these subjective benefits should not enter into the IRB's analysis of benefits and risks” [40; emphasis added].

  23. For a potentially representative voice, see [43]. I say only “potentially” since although Rosenfield says he thinks that subpart D is a “barrier” to good clinical research, he never argues for this claim. His only protestations concern the extended review process of his own study, which was reviewed under category §46.407. No discernible challenge to the ethical standards of subpart D was issued.

  24. Emphasis mine. This clause is meant to direct an IRB to the risks/benefits of the research, and not the risks/benefits of clinical care. But clinical care is, logically, an alternative to research.

  25. I would like to thank Fr. Jim McCartney, Mark Doorley, Barbara Ott, Brett Wilmot, John Carvalho, and Peter Wicks for comments and discussion on the arguments presented here and to two anonymous referees who provided extensive and very helpful comments on a previous draft.

References

  1. Chapman, Audrey R. 2011. Addressing the ethical challenges of first-in-human trials. Journal of Clinical Research and Bioethics 2(4): 1–8.

  2. Anderson, James A. 2006. The ethics and science of placebo-controlled trials: Assay sensitivity and the Duhem–Quine thesis. Journal of Medicine and Philosophy 31: 65–81.

  3. Anderson, James A., and Jonathan Kimmelman. 2010. Extending clinical equipoise to phase 1 trials involving patients: Unresolved problems. Kennedy Institute of Ethics Journal 20(1): 75–98.

  4. Macklin, Ruth. 1999. The ethical problem with sham surgery in clinical research. New England Journal of Medicine 341: 992–996.

  5. Miller, Franklin G., and Alan Wertheimer. 2007. Facing up to paternalism in research ethics. Hastings Center Report 37(3): 24–34.

  6. Field, Marilyn J., and Richard E. Behrman (eds.). 2004. Ethical conduct of clinical research involving children. Washington, DC: National Academies Press.

  7. Kong, W.M. 2005. Legitimate requests and indecent proposals: Matters of justice in the ethical assessment of phase I trials involving competent patients. Journal of Medical Ethics 31: 205–208.

  8. Secretary’s Advisory Committee on Human Research Protections. 2009. March 2009 meeting presentations. http://www.hhs.gov/ohrp/sachrp/mtgings/mtg03-09/present.html. Accessed September 12, 2011.

  9. Coleman, Carl H. 2009. Vulnerability in biomedical research. Journal of Law, Medicine and Ethics 37: 12–18.

  10. Bankert, Elizabeth A., and Robert Amdur (eds.). 2006. Institutional review board: Management and function, 2nd ed. Sudbury, MA: Jones and Bartlett.

  11. Manson, Neil C., and Onora O’Neill. 2007. Rethinking informed consent in bioethics. New York: Cambridge University Press.

  12. Hurd, Heidi M. 1996. The moral magic of consent. Legal Theory 2(2): 121–146.

  13. Wendler, David, Leah Belsky, Kimberly M. Thompson, and Ezekiel Emanuel. 2005. Quantifying the federal minimal risk standard: Implications for pediatric research without a prospect of direct benefit. JAMA 294(7): 826–832.

  14. Wendler, David. 2005. Protecting subjects who cannot give consent: Toward a better standard for “minimal” risks. Hastings Center Report 35(5): 37–43.

  15. U.S. Code of Federal Regulations. 2009. Protection of human subjects. 45 CFR 46. http://ohsr.od.nih.gov/guidelines/45cfr46.html. Accessed September 21, 2011.

  16. King, Nancy. 2000. Defining and describing benefit appropriately in clinical trials. Journal of Law, Medicine, and Ethics 28: 332–343.

  17. Weijer, Charles, and Paul B. Miller. 2004. When are research risks reasonable in relation to anticipated benefits? Nature Medicine 10(6): 570–573.

  18. Friedman, Alexander, Emily Robbins, and David Wendler. 2012. Which benefits of research participation count as ‘direct’? Bioethics 26(2): 60–67.

  19. National Council for Science and Technology. 2004. Guidelines for ethical conduct of biomedical research involving human subjects in Kenya. Nairobi: National Council for Science and Technology.

  20. Gettier, Edmund. 1963. Is justified true belief knowledge? Analysis 23: 121–123.

  21. Wendler, David. 2011. What we worry about when we worry about the ethics of clinical research. Theoretical Medicine and Bioethics 32: 161–180.

  22. Zagzebski, Linda. 2004. Divine motivation theory. New York: Cambridge University Press.

  23. Stump, Eleonore. 2010. Wandering in darkness: Narrative and the problem of suffering. New York: Oxford University Press.

  24. Stump, Eleonore. 1996. Aquinas on the sufferings of Job. In The evidential argument from evil, ed. Daniel Howard-Snyder, 49–68. Bloomington: Indiana University Press.

  25. Russell, Bruce. 1989. The persistent problem of evil. Faith and Philosophy 6(2): 121–139.

  26. Emanuel, Ezekiel J., David Wendler, and Christine Grady. 2000. What makes clinical research ethical? JAMA 283(20): 2701–2711.

  27. Hurd, Heidi M. 2005. Blaming the victim: A response to the proposal that criminal law recognize a general defense of contributory responsibility. Buffalo Criminal Law Review 8: 503–522.

  28. Dempsey, Michelle M. 2012. Victimless conduct and the volenti maxim: How consent works. Criminal Law and Philosophy. doi:10.1007/s11572-012-9162-0.

  29. Finn, Peter. 2003. Cannibal case grips Germany; Suspect says internet correspondent volunteered to die. Washington Post, December 4: A26.

  30. Bergelson, Vera. 2008. Autonomy, dignity, and consent to harm. Rutgers Law Review 60: 723–736.

  31. Tadros, Victor. 2011. Consent to harm. Current Legal Problems 64: 23–49.

  32. Stump, Eleonore. 1986. Providence and the problem of evil. In Christian philosophy, ed. Thomas Flint, 51–91. Notre Dame, IN: University of Notre Dame Press.

  33. Tollefsen, Christopher. 2006. Is a purely first-person account of human action defensible? Ethical Theory and Moral Practice 9: 441–460.

  34. Jonas, Hans. 1969. Philosophical reflections on experimenting with human subjects. Daedalus 98(2): 219–247.

  35. Appelbaum, Paul S., Loren H. Roth, Charles W. Lidz, Paul Benson, and William Winslade. 1987. False hopes and best data: Consent to research and the therapeutic misconception. Hastings Center Report 17(2): 20–24.

  36. Almeida, Luis, Benedita Azevedo, Teresa Nunes, Manuel Vaz-da-Silva, and Patrício Soares-da-Silva. 2007. Why healthy subjects volunteer for phase I studies and how they perceive their participation? European Journal of Clinical Pharmacology 63(11): 1085–1094.

  37. Emanuel, Ezekiel J., and Franklin G. Miller. 2007. Money and distorted ethical judgment about research: Ethical assessment of the TeGenero TGN1412 trial. American Journal of Bioethics 7: 76–81.

  38. Resnik, David B., and Greg Koski. 2011. A national registry for healthy volunteers in phase 1 clinical trials. JAMA 305(12): 1236–1237.

  39. Brewer, Talbot. 2009. The retrieval of ethics. New York: Oxford University Press.

  40. Office of Human Research Protections. 1993. Institutional review board guidebook: Chapter III, basic IRB review. http://www.hhs.gov/ohrp/archive/irb/irb_chapter3.htm. Accessed September 21, 2011.

  41. Garattini, Silvio, and Vittorio Bertele. 2007. Non-inferiority trials are unethical because they disregard patients’ interests. Lancet 370: 1875–1877.

  42. Gagne, Joshua J., and Niteesh K. Choudhry. 2011. How many “Me-Too” drugs is too many? JAMA 305(7): 711–712.

  43. Rosenfield, Robert L. 2008. Improving balance in regulatory oversight of research in children and adolescents: A clinical investigator’s perspective. Annals of the New York Academy of Sciences 1135: 287–295.

  44. Marshall, Barry J. 2005. Nobel Prize in Physiology or Medicine: Autobiography. Nobel Foundation. http://nobelprize.org/nobel_prizes/medicine/laureates/2005/marshall-autobio.html. Accessed May 7, 2012.

  45. Resnik, David B. 2012. Limits on risks for healthy volunteers in biomedical research. Theoretical Medicine and Bioethics 33(2): 137–149.

  46. Bazerman, Max H., and Ann E. Tenbrunsel. 2011. Ethical breakdowns. Harvard Business Review 89(4): 58–65.

  47. Miller, Franklin G., and Steven Joffe. 2009. Limits to research risks. Journal of Medical Ethics 35(7): 445–449.

  48. Elliott, Carl. 1995. Doing harm: Living organ donors, clinical research and the Tenth Man. Journal of Medical Ethics 21: 91–96.


Corresponding author

Correspondence to Stephen Napier.

Cite this article

Napier, S. Challenging research on human subjects: justice and uncompensated harms. Theor Med Bioeth 34, 29–51 (2013). https://doi.org/10.1007/s11017-013-9241-9
