
Probability and informed consent

Published in Theoretical Medicine and Bioethics.

Abstract

In this paper, we illustrate some serious difficulties involved in conveying information about uncertain risks and securing informed consent for risky interventions in a clinical setting. We argue that in order to secure informed consent for a medical intervention, physicians often need to do more than report a bare, numerical probability value. When probabilities are given, securing informed consent generally requires communicating how probability expressions are to be interpreted and communicating something about the quality and quantity of the evidence for the probabilities reported. Patients may also require guidance on how probability claims may or may not be relevant to their decisions, and physicians should be ready to help patients understand these issues.


Notes

  1. Here we follow authors who have argued that comprehension of the information disclosed is an important component of informed consent, and that medical practitioners should help their patients comprehend the information they disclose through effective communication [3; 4, ch. 4]. However, some authors have raised worries about aspects of the requirement to ensure understanding as part of informed consent [5,6,7].

  2. We are neutral with respect to whether one should use verbal probability expressions (such as ‘likely’ or ‘probable’) or numerical probability expressions (such as ‘20-percent chance’). For recent literature on the use of verbal and numerical probability expressions, see [8,9,10,11,12,13,14,15,16,17]. We are also neutral with respect to several related issues having to do with the presentation of probabilistic information. We are neutral with respect to whether one should use a point estimate or a probability range, for which see [18,19,20,21]. We are neutral with respect to whether one should use symbolic-algebraic representations or iconic-geometric representations, for which see [22,23,24]. And we are neutral with respect to what features of patients, such as numeracy [25], affect their understanding of probability reports. The issues we raise in this paper arise regardless of how these incredibly important and interesting debates are ultimately settled. In this connection, this literature suggests that there is no agreed-upon clinical guidance or accepted standard for how physicians are required to communicate the probabilities of different outcomes or risks involved in procedures.

  3. Katz is quoting from [28].

  4. Patient autonomy as a justification for informed consent has been endorsed by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research [34], and the President’s Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research [35]. For additional discussion of the relations between autonomy and informed consent, see [29; 36, ch. 7].

  5. For similar suggestions, see also [39, 40]. For a paper that lays out an argument for informed consent based on building trust, but that attacks that argument and the idea more generally, see [41].

  6. One might think that physicians should provide only numerically precise frequency statements to patients because statements about natural frequencies are better understood (because more ecologically valid) than decimal percentages. For a recent meta-analysis of work on the natural frequency facilitation effect, see [14]. As with other presentation format issues, we are neutral with respect to whether probabilistic information should be presented as a natural frequency or as a decimal. We maintain that using natural frequencies does not solve all of the problems. We will return to this point later.

  7. Paul Han argues that different types of probability are important to communicate to patients. He focuses primarily on the difference between what he calls “epistemic” and “aleatory” uncertainty (subjective confidence versus known risk) rather than on the difference between Bayesian and Frequentist interpretations. Like us, he concludes that greater conceptual clarity is required for adequate communication [44].

  8. Patients might waive the default obligation if, like Han Solo, they do not want anyone to tell them the odds. But patients need to be afforded the opportunity to learn about their risks and to understand what they are giving up when they waive the physician’s obligation.

  9. There are many ways to work out the details of each main approach. For examples of varieties of Frequentism, see [45, 46]. For examples of varieties of Bayesianism, see [47].

  10. A referee wondered whether our claim here is really correct and suggested that the physician could be making a claim about the frequency of success in a range of related cases. At a first pass, we want to resist this suggestion and say that the imagined physician is not making a claim about the actual frequencies with respect to that specific kind of procedure. Our interlocutor could respond by saying that the physician includes the new procedure in a wider reference class. But if that response works, then the procedure is not best thought of as a new one that has never been tried. Going a bit further, suppose the physician says, “There is a 20% chance of death with this procedure.” If the physician means that in some wider reference class, death results 20% of the time, it seems to us to be important that the patient knows both that the claim is about a wider reference class (since Gricean conversational maxims would ordinarily suggest that the physician’s claim is maximally specific) and also what the reference class really is. Moreover, if the physician’s claim is about the frequency of success for some different but related procedures, then, in addition to ambiguity in the physician’s claim, there is an important inferential problem, which we discuss in more detail in the subsection below.

  11. One might think of the issue here as a consequence of the fact that consent is a propositional attitude with intensional context, for which see [48]. A patient might assent to undergoing a procedure that has a 70% probability of success but not a procedure that succeeds 70 times out of 100.

  12. What we have in mind here is similar to what is sometimes called “ambiguity” [44]. However, we think there is an important distinction to be made between evaluating the quantity and quality of evidence for a probability claim and the complete uncertainty about probability (what we would count as genuine ambiguity) that figures in Ellsberg’s paradox and related puzzles in decision theory. While some worry that ambiguity aversion will lead patients to avoid decision making if they are presented with higher-order probabilities [44], other studies suggest that in cases where the quantity and quality of evidence is good, “including weight of evidence content … attenuates perceived information uncertainty” [49, p. 1302].

  13. A referee wondered whether a physician pleading ignorance could be the basis for valid consent. Up front, we take pleading ignorance to be incompatible with the demand in [C2]. In a broad range of typical therapeutic, clinical cases—probably in most of them—we think pleading ignorance would be impermissible, since it would either mean that the physician was not competent to advise the patient or that the physician was untruthful and unwilling to show proper concern for the patient’s interests. However, not all cases are the same, and in especially difficult or unusual cases or in cases where everything has been tried, even competent and caring physicians may have no good estimate to give. In those cases, disclosing ignorance is probably required and is perhaps also sufficient to obtain informed consent. Hence, we think that the requirement in [C2] is probably too strong. In experimental research contexts, we are much less sure what to say. We think that uncertainty still needs to be disclosed, but we are not sure that the form of such disclosure will be similar to the therapeutic, clinical context.

  14. By “evidence for a probability claim,” we here mean evidence that bears on an estimate of the population mean, that is, limiting frequency. When the sample is small, we have lower quality estimates of the limiting frequency.
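The point about small samples can be made concrete: under a Frequentist reading, the standard error of an estimated frequency shrinks as 1/√n, so the same observed rate carries very different evidential weight depending on how many cases back it. A minimal sketch (the numbers are illustrative, not from the paper):

```python
import math

def standard_error(p_hat: float, n: int) -> float:
    """Standard error of an estimated proportion p_hat based on n observations."""
    return math.sqrt(p_hat * (1 - p_hat) / n)

# The same 20% observed rate, backed by very different sample sizes:
se_small = standard_error(0.20, 10)     # roughly 0.126 -- a very rough estimate
se_large = standard_error(0.20, 1000)   # roughly 0.013 -- a much tighter one
```

The two reports sound identical ("a 20% chance"), yet one estimate is about ten times noisier than the other, which is exactly the information a bare probability value omits.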

  15. Peirce used the label “weight of evidence” to refer to what we would call the “quantity” of the evidence [50]. For further discussion, see [51, ch. 6; 52, 53, ch. 14]. For discussion of relatively recent use of the phrase “weight of evidence” in biomedical science, see [54].

  16. Hoekstra et al. modeled their study on a famous study by Gerd Gigerenzer, which provided empirical evidence that researchers frequently misinterpret p-values, another mainstay of Frequentist statistics [56]. In their study, Hoekstra et al. told a story about a professor who reports a 95% confidence interval of (0.1, 0.4) for a mean value being estimated. They then asked 118 researchers to say whether each of six statements was true or false. The number of researchers reporting the wrong answer ranged from 45 to 102. For example, 70 out of 118 endorsed the false claim that there is a 95% probability that the true mean lies between 0.1 and 0.4, and 68 out of 118 endorsed the false claim that if we were to repeat the experiment over and over, then 95% of the time the true mean falls between 0.1 and 0.4. The first of these mistakes a Frequentist confidence interval for a Bayesian credible interval. The second treats the confidence interval as fixed and the parameter being estimated as variable, whereas the Frequentist thinks of the interval as variable and the parameter as fixed [55].
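The Frequentist reading that Hoekstra et al. have in mind can be checked by simulation: the parameter stays fixed while the interval varies from experiment to experiment, and roughly 95% of the intervals cover the fixed mean. A sketch under simplifying assumptions (normal data, known standard deviation, the usual z-interval):

```python
import random
import statistics

random.seed(0)
TRUE_MEAN = 0.25          # the fixed parameter being estimated
N, TRIALS, Z = 50, 2000, 1.96

covered = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, 1.0) for _ in range(N)]
    m = statistics.fmean(sample)
    half = Z * 1.0 / N ** 0.5          # known sigma = 1 for simplicity
    if m - half <= TRUE_MEAN <= m + half:
        covered += 1

coverage = covered / TRIALS            # close to 0.95 over many repetitions
```

It is the intervals that move from run to run, not the mean, which is why "there is a 95% probability that the true mean lies in (0.1, 0.4)" misreads a confidence interval as a Bayesian credible interval.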

  17. A referee observes that even the idea of what is needed by patients presents serious difficulties, since what is needed by some patients might not be needed by others. It is not clear, then, whether the requirement is to provide all the information that this patient needs or all the information that a reasonable patient would need or all the information that a suitably large percentage of ordinary patients need or something else. But since here we are being critical of the idea of simply presenting all the needed information, we take the referee’s point to be grist for our mill.

  18. Even if one is a Bayesian, the sampling procedure may matter in special cases [57].

  19. In the circumstances described, she applies Laplace’s Rule of Succession and says that the probability of injury on the next procedure is equal to (m + 1)/(n + 2), where m is the number of injuries observed out of n procedures. For more on the Rule of Succession, see [58, ch. 2].
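The rule is easy to state in code: observing m injuries in n procedures yields an estimate of (m + 1)/(n + 2) for the next procedure. A sketch with illustrative inputs:

```python
from fractions import Fraction

def rule_of_succession(m: int, n: int) -> Fraction:
    """Laplace's estimate of the probability that an event occurs on the
    next trial, given m occurrences in n observed trials."""
    return Fraction(m + 1, n + 2)

# No injuries observed in 10 procedures: the estimate is 1/12, not 0.
p_next = rule_of_succession(0, 10)
```

Note that the rule never returns 0 or 1, which reflects the Bayesian refusal to treat a short run of observations as settling the matter.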

  20. Traditionally, Bayesian credences have been interpreted as betting odds. If you are 70% sure that an event will occur, it means (roughly) you personally are willing to pay up to 70 cents for a bet that returns $1 if the event occurs and nothing otherwise. Although such an interpretation is illustrative, it is not without problems. In particular, it seems distasteful (at the least) to bet on whether a patient's operation will be successful. Nonetheless, we think the betting odds framework helps highlight the difference between the Frequentist and Bayesian approaches.
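On the betting interpretation, a credence of 0.70 marks the price at which a $1 ticket on the event is (roughly) fair: buying below 70 cents has positive expected value, buying above has negative. A minimal sketch of that bookkeeping (purely illustrative, with no clinical bets intended):

```python
def expected_profit(credence: float, price: float, payout: float = 1.0) -> float:
    """Expected profit of buying, at `price`, a ticket that pays `payout`
    if the event occurs and nothing otherwise."""
    return credence * payout - price

fair = expected_profit(0.70, 0.70)   # break-even at the credence itself
good = expected_profit(0.70, 0.60)   # positive: priced below the credence
bad = expected_profit(0.70, 0.80)    # negative: priced above the credence
```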

  21. A referee suggested that physicians could avoid the difficulty of expert disagreement by simply telling patients that other experts might have different views and then reminding patients that they can talk with other physicians before proceeding. In some cases, we think that would be sufficient, but there are many circumstances where it would not. For example, advising patients about the possibility of consulting with other physicians may not be adequate if one is a rural doctor seeing a patient who has very limited options or if one’s patient is poor or has insurance that limits their ability to see other physicians or if the patient needs to act quickly. And we expect there are other similar kinds of cases. Even if the recommendation worked, however, it would concede the point we want to make here: that a bare numerical probability statement is not sufficient to secure informed consent.

  22. Professional or expert judgment about risk is an important issue in areas outside biomedical ethics, as well. For some discussion of expert judgment in engineering ethics, see [59]. In certain cases, physicians may well come close to consensus even if they have trouble articulating precisely what their evidential basis is. However, since physicians disagree relatively often, such cases are far from universal.

  23. Simpson’s Paradox occurs when an association between two variables disappears or reverses conditional on a third variable. The threat of Simpson’s Paradox is non-trivial. Hanley and Thériault reported an example of Simpson’s Paradox in meta-analyses of randomized controlled trials [60]. Nissen and Wolski found a significant increase in the risk of myocardial infarctions in groups taking Rosiglitazone over the control groups over a number of individual studies. However, when data from all the studies were pooled, the Rosiglitazone group had a slightly lower rate of MIs compared to the control [61].
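The reversal is easy to verify with the much-cited success counts from the kidney stone comparison mentioned in the next note: open surgery does better within each stone-size group, yet worse overall, because it was disproportionately assigned the harder (large-stone) cases.

```python
# Successes/attempts from Charig et al. [62], as standardly reported.
open_surgery = {"small": (81, 87), "large": (192, 263)}
percutaneous = {"small": (234, 270), "large": (55, 80)}

def rate(successes: int, total: int) -> float:
    return successes / total

def overall(groups) -> float:
    s = sum(a for a, _ in groups.values())
    t = sum(b for _, b in groups.values())
    return s / t

# Open surgery wins in both subgroups...
assert rate(*open_surgery["small"]) > rate(*percutaneous["small"])
assert rate(*open_surgery["large"]) > rate(*percutaneous["large"])
# ...but loses overall: 273/350 versus 289/350.
assert overall(open_surgery) < overall(percutaneous)
```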

  24. The numbers for our toy example are based on a famous real-life study by C. R. Charig et al. [62] that compared different methods for treating kidney stones. The unscrupulousness is novel to our story.

  25. For our purposes, a “causal structure” is a pattern of causal relations that hold with respect to some domain. Minimally, we take a causal relation to reveal how an outcome depends on changes in actions one could take. One would want to know, for example, what would happen to the success rate if one were to choose open surgery rather than a less invasive option. For introductions to some of the issues here, see [63, 64].

  26. In the language of Pearl’s ‘do calculus,’ a report of the value of Pr(Y = y | X = x) is action-guiding only if it is a guide to the value of Pr(Y = y | do(X = x)) [65].
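The gap between the two quantities can be computed exactly in a toy confounded model (the numbers are entirely hypothetical): a background condition Z raises both the chance of treatment X and the chance of recovery Y, so conditioning on X = 1 mixes in evidence about Z, while do(X = 1) averages over Z at its base rate.

```python
# Hypothetical model with a confounder Z affecting both X and Y.
P_Z = {0: 0.5, 1: 0.5}                     # P(Z = z)
P_X_given_Z = {0: 0.2, 1: 0.8}             # P(X = 1 | Z = z)
P_Y_given_XZ = {(0, 0): 0.3, (0, 1): 0.5,
                (1, 0): 0.4, (1, 1): 0.6}  # P(Y = 1 | X = x, Z = z)

# Observational: P(Y = 1 | X = 1), conditioning over Z.
num = sum(P_Y_given_XZ[(1, z)] * P_X_given_Z[z] * P_Z[z] for z in (0, 1))
den = sum(P_X_given_Z[z] * P_Z[z] for z in (0, 1))
p_obs = num / den                          # 0.56

# Interventional: P(Y = 1 | do(X = 1)), back-door adjustment over Z.
p_do = sum(P_Y_given_XZ[(1, z)] * P_Z[z] for z in (0, 1))  # 0.50

# p_obs > p_do here: observing X = 1 is evidence for Z = 1; intervening is not.
```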

  27. The case of Arato v Avedon, which deals explicitly with the relationship between informed consent and statistical information, turned in part on the perception among oncologists that many patients did not want to be told the hard truth about their condition [67].

  28. This is in line with empirical findings regarding effective communication of first-order uncertainty, for which consult the literature noted in fn. 2.

  29. Paul Han et al. have developed and evaluated an experience-based clinical risk communication curriculum for medical students, which, although resource-intensive, was efficacious [68].

References

  1. O’Neill, Onora. 2002. Autonomy and trust in bioethics. Cambridge: Cambridge University Press.

  2. Millum, Joseph, and Danielle Bromwich. 2018. Understanding, communication, and consent. Ergo 5: 45–68.

  3. Manson, Neil C., and Onora O’Neill. 2007. Rethinking informed consent in bioethics. Cambridge: Cambridge University Press.

  4. Beauchamp, Tom L., and James F. Childress. 2008. Principles of biomedical ethics, 6th ed. Oxford: Oxford University Press.

  5. Sreenivasan, Gopal. 2003. Does informed consent to research require comprehension? Lancet 362: 2016–2018.

  6. Miller, Franklin G., and Alan Wertheimer. 2011. The fair transaction model of informed consent: An alternative to autonomous authorization. Kennedy Institute of Ethics Journal 21: 201–218.

  7. Walker, Tom. 2012. Informed consent and the requirement to ensure understanding. Journal of Applied Philosophy 29: 50–62.

  8. Christopher, M.M., C.S. Hotz, S.M. Shelly, and P.D. Pion. 2010. Interpretation by clinicians of probability expressions in cytology reports and effect on clinical decision-making. Journal of Veterinary Internal Medicine 24: 496–503.

  9. Collins, Peter J., and Ulrike Hahn. 2018. Communicating and reasoning with verbal probability expressions. Psychology of Learning and Motivation 69: 67–105.

  10. Hamann, J., R. Mendel, M. Bühner, S. Leucht, and W. Kissling. 2011. Drowning in numbers: What psychiatrists mean when talking to patients about probabilities of risks and benefits of medication. European Psychiatry 26: 130–131.

  11. Hanauer, David A., Yang Liu, Qiaozhu Mei, Frank J. Manion, Ulysses J. Balis, and Kai Zheng. 2012. Hedging their mets: The use of uncertainty terms in clinical documents and its potential implications when sharing the documents with patients. AMIA Annual Symposium Proceedings 2012: 321–330.

  12. Jenkins, Sarah C., Adam J. L. Harris, and R.M. Lark. 2018. Understanding ‘unlikely (20% likelihood)’ or ‘20% likelihood (unlikely)’ outcomes: The robustness of the extremity effect. Journal of Behavioral Decision Making 31: 572–586.

  13. Karelitz, Tzur M., and David V. Budescu. 2004. You say “probable” and I say “likely”: Improving interpersonal communication with verbal probability phrases. Journal of Experimental Psychology: Applied 10: 25–41.

  14. McDowell, Michelle, and Perke Jacobs. 2017. Meta-analysis of the effect of natural frequencies on Bayesian reasoning. Psychological Bulletin 143: 1273–1312.

  15. Spiegelhalter, David. 2017. Risk and uncertainty communication. Annual Review of Statistics and Its Application 4: 31–60.

  16. Tait, Alan R., Terri Voepel-Lewis, Brian J. Zikmund-Fisher, and Angela Fagerlin. 2010. Presenting research risks and benefits to parents: Does format matter? Anesthesia and Analgesia 111: 718–723.

  17. Vahabi, Mandana. 2010. Verbal versus numerical probabilities: Does format presentation of probabilistic information regarding breast cancer screening affect women’s comprehension? Health Education Journal 69: 150–163.

  18. Dieckmann, Nathan F., Robert Mauro, and Paul Slovic. 2010. The effects of presenting imprecise probabilities in intelligence forecasts. Risk Analysis: An International Journal 30: 987–1001.

  19. Jenkins, Sarah C., Adam J. L. Harris, and R. Murray Lark. 2018. When unlikely outcomes occur: The role of communication format in maintaining communicator credibility. Journal of Risk Research 22: 537–554.

  20. Longman, Thea, Robin M. Turner, Madeleine King, and Kirsten J. McCaffery. 2012. The effects of communicating uncertainty in quantitative health risk estimates. Patient Education and Counseling 89: 252–259.

  21. Sladakovic, Jovana, Jesse Jansen, Jolyn Hersch, Robin Turner, and Kirsten McCaffery. 2016. The differential effects of presenting uncertainty around benefits and harms on treatment decision making. Patient Education and Counseling 99: 974–980.

  22. Lipkus, Isaac M. 2007. Numeric, verbal, and visual formats of conveying health risks: Suggested best practices and future recommendations. Medical Decision Making 27: 696–713.

  23. Spiegelhalter, David, Mike Pearson, and Ian Short. 2011. Visualizing uncertainty about the future. Science 333: 1393–1400.

  24. Zipkin, Daniella A., Craig A. Umscheid, Nancy L. Keating, Elizabeth Allen, KoKo Aung, Rebecca Beyth, Scott Kaatz, Devin M. Mann, Jeremy B. Sussman, Deborah Korenstein, Connie Schardt, Avishek Nagi, Richard Sloane, and David A. Feldstein. 2014. Evidence-based risk communication: A systematic review. Annals of Internal Medicine 161: 270–280.

  25. Nelson, Wendy, Valerie F. Reyna, Angela Fagerlin, Isaac Lipkus, and Ellen Peters. 2008. Clinical implications of numeracy: Theory and practice. Annals of Behavioral Medicine 35: 261–274.

  26. Brock, Dan W. 1987. Informed consent. In Health care ethics: An introduction, eds. Donald Van de Veer and Tom Regan, 98–126. Philadelphia: Temple University Press.

  27. Katz, Jay. 1984. The silent world of doctor and patient. New York: The Free Press.

  28. Natanson v. Kline. 1960. 186 Kan. 393, 350 P.2d 1093.

  29. Young, Robert. 2009. Informed consent and patient autonomy. In A companion to bioethics, eds. Helga Kuhse and Peter Singer, 2nd ed., 530–540. Malden, MA, and Oxford: Wiley-Blackwell.

  30. Berg, Jessica W., Paul S. Appelbaum, Charles W. Lidz, and Lisa S. Parker. 2001. Informed consent: Legal theory and clinical practice, 2nd ed. New York: Oxford University Press.

  31. Faden, Ruth R., and Tom L. Beauchamp. 1986. A history and theory of informed consent. New York: Oxford University Press.

  32. Beauchamp, Tom L. 2010. Autonomy and consent. In The ethics of consent: Theory and practice, eds. Franklin G. Miller and Alan Wertheimer, 55–78. New York: Oxford University Press.

  33. Sawicki, Nadia N. 2016. Modernized informed consent: Expanding the boundaries of materiality. University of Illinois Law Review 2016: 821–871.

  34. National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. 1979. The Belmont Report. Federal Register 44: 23192–23197.

  35. President’s Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research. 1982. Making health care decisions: The ethical and legal implications of informed consent in the patient-practitioner relationship. Washington, D.C.: U.S. Government Printing Office.

  36. Dworkin, Gerald. 1988. The theory and practice of autonomy. Cambridge: Cambridge University Press.

  37. Archard, David. 2008. Informed consent: Autonomy and self-ownership. Journal of Applied Philosophy 25: 19–34.

  38. Bromwich, Danielle, and Joseph Millum. 2015. Disclosure and consent to medical research participation. Journal of Moral Philosophy 12: 195–219.

  39. Tännsjö, Torbjörn. 1999. Coercive care: The ethics of choice in health and medicine. London and New York: Routledge.

  40. Jackson, Jennifer. 2001. Truth, trust and medicine. London and New York: Routledge.

  41. Eyal, Nir. 2014. Using informed consent to save trust. Journal of Medical Ethics 40: 437–444.

  42. Jamieson, Denise J., Caroline Costello, James Trussell, Susan D. Hillis, Polly A. Marchbanks, and Herbert B. Peterson. 2004. The risk of pregnancy after vasectomy. Obstetrics & Gynecology 103: 848–850.

  43. Cook, T.M., D. Counsell, and J.A.W. Wildsmith. 2009. Major complications of central neuraxial block: Report on the Third National Audit Project of the Royal College of Anaesthetists. British Journal of Anaesthesia 102: 179–190.

  44. Han, Paul K. J. 2013. Conceptual, methodological, and ethical problems in communicating uncertainty in clinical evidence. Medical Care Research and Review 70: 14S–36S.

  45. Hájek, Alan. 1996. “Mises Redux”—Redux: Fifteen arguments against finite frequentism. Erkenntnis 45: 209–227.

  46. Hájek, Alan. 2009. Fifteen arguments against hypothetical frequentism. Erkenntnis 70: 211–235.

  47. Good, I.J. 1971. 46656 varieties of Bayesians. The American Statistician 25: 62–63.

  48. O’Neill, Onora. 2003. Some limits of informed consent. Journal of Medical Ethics 29: 4–7.

  49. Clarke, Christopher E., Brooke Weberling McKeever, Avery Holton, and Graham N. Dixon. 2015. The influence of weight-of-evidence messages on (vaccine) attitudes: A sequential mediation model. Journal of Health Communication 20: 1302–1309.

  50. Peirce, Charles S. 1878. The probability of induction. Popular Science Monthly 12: 705–718.

  51. Keynes, John M. 1921. A treatise on probability. London: Macmillan and Company.

  52. Good, I.J. 1985. Weight of evidence: A brief survey. Bayesian Statistics 2: 249–269.

  53. Bradley, Richard. 2016. Decision theory with a human face. Cambridge: Cambridge University Press.

  54. Weed, Douglas L. 2005. Weight of evidence: A review of concept and methods. Risk Analysis 25: 1545–1557.

  55. Hoekstra, Rink, Richard D. Morey, Jeffrey N. Rouder, and Eric-Jan Wagenmakers. 2014. Robust misinterpretation of confidence intervals. Psychonomic Bulletin & Review 21: 1157–1164.

  56. Gigerenzer, Gerd. 2004. Mindless statistics. The Journal of Socio-Economics 33: 587–606.

  57. Steel, Daniel. 2003. A Bayesian way to make stopping rules matter. Erkenntnis 58: 213–227.

  58. Zabell, S.L. 2005. Symmetry and its discontents: Essays on the history of inductive probability. Cambridge: Cambridge University Press.

  59. Murphy, Colleen, and Paolo Gardoni. 2008. The acceptability and the tolerability of societal risks: A capabilities-based approach. Science and Engineering Ethics 14: 77–92.

  60. Hanley, James A., and Gilles Thériault. 2000. Simpson’s paradox in meta-analysis. Epidemiology 11: 613–614.

  61. Nissen, Steven E., and Kathy Wolski. 2007. Effect of rosiglitazone on the risk of myocardial infarction and death from cardiovascular causes. New England Journal of Medicine 356: 2457–2471.

  62. Charig, C.R., D.R. Webb, S.R. Payne, and J.E.A. Wickham. 1986. Comparison of treatment of renal calculi by open surgery, percutaneous nephrolithotomy, and extracorporeal shockwave lithotripsy. British Medical Journal (Clinical Research Edition) 292: 879–882.

  63. Cartwright, Nancy. 1979. Causal laws and effective strategies. Noûs 13: 419–437.

  64. Woodward, James. 2003. Making things happen. Oxford: Oxford University Press.

  65. Pearl, Judea. 2009. Causality: Models, reasoning, and inference, 2nd ed. Cambridge: Cambridge University Press.

  66. Jamieson, Denise J., Steven C. Kaufman, Caroline Costello, Susan D. Hillis, Polly A. Marchbanks, Herbert B. Peterson, and the U.S. Collaborative Review of Sterilization Working Group. 2002. A comparison of women’s regret after vasectomy versus tubal sterilization. Obstetrics and Gynecology 99: 1073–1079.

  67. Arato v. Avedon. 1993. 5 Cal. 4th 1172, 23 Cal. Rptr. 2d 131, 858 P.2d 598.

  68. Han, Paul K. J., Katherine Joekes, Glyn Elwyn, Kathleen M. Mazor, Richard Thomson, Philip Sedgwick, Judith Ibison, and John B. Wong. 2014. Development and evaluation of a risk communication curriculum for medical students. Patient Education and Counseling 94: 43–49.

Acknowledgements

We would like to thank the following individuals for invaluable feedback on an early draft of this paper: Richard Bradley, Kathleen Creel, Samuel Fletcher, Conor Mayo-Wilson, Colleen Murphy, and Jonah Schupbach. We would also like to thank audiences at workshops at the Romanell Center for Clinical Ethics and the Philosophy of Medicine and at the Rotman Institute of Philosophy for their excellent feedback. Finally, we are grateful to two anonymous referees for Theoretical Medicine and Bioethics for detailed comments that aided in substantially improving the paper.

Author information

Corresponding author

Correspondence to Nir Ben-Moshe.

Ethics declarations

Conflict of interest

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Ben-Moshe, N., Levinstein, B.A. & Livengood, J. Probability and informed consent. Theor Med Bioeth 44, 545–566 (2023). https://doi.org/10.1007/s11017-023-09636-0
