
Future pandemics and the urge to ‘do something’
  Adam Lerner,1 Nir Eyal2

  1. Center for Population-Level Bioethics, Rutgers The State University of New Jersey, New Brunswick, New Jersey, USA
  2. Center for Population-Level Bioethics, Department of Philosophy (SAS) and Department of HBSP (SPH), Rutgers University, New Brunswick, New Jersey, USA

  Correspondence to Dr Adam Lerner, Center for Population-Level Bioethics, Rutgers The State University of New Jersey, New Brunswick, New Jersey, USA; lerner.adam.jared@gmail.com

Abstract

Research with enhanced potential pandemic pathogens (ePPP) makes pathogens substantially more lethal, communicable, immunosuppressive or otherwise capable of triggering a pandemic. We briefly relay an existing argument that the benefits of ePPP research do not outweigh its risks and then consider why proponents of this research continue to confidently endorse it. We argue that these endorsements may well be the product of common cognitive biases—in which case they would provide no challenge to the argument against ePPP research. If the case against ePPP research is strong, the views of professional experts do little to move the needle in favour of ePPP research.

  • Biological Warfare Agents
  • Communicable Diseases
  • Decision Making
  • Ethics, Research
  • Ethics Committees


This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


New chapter in the debate on ePPP research

Research with enhanced potential pandemic pathogens (ePPP) makes pathogens substantially more lethal, communicable, immunosuppressive or otherwise capable of triggering a pandemic. Such research may help protect against natural pandemics, but ‘can be inherently high risk given the possibility of biosafety lapses or deliberate misuse’.1 Multiple prominent experts therefore consider ePPP research to be a major pandemic risk in itself.2–4

On 1 September 2023, the US Office of Science and Technology Policy announced potential changes to the policies for oversight of dual-use research of concern (DURC) and the potential pandemic pathogen care and oversight policy framework.5 Those changes follow recommendations issued in March by the US National Science Advisory Board for Biosecurity. Both recommend bringing more studies under review, but not banning ePPP research or DURC.5

Proponents of ePPP research insist that their work is essential.6 As 150 virologist proponents put it earlier this year, ‘In many cases, gain-of-function research-of-concern can very clearly advance pandemic preparedness and the development of vaccines and antivirals. These tangible benefits often far outweigh the theoretical risks posed by modified viruses’.7 We summarise an existing argument that these risk-benefit assessments are incorrect and then investigate why proponents believe otherwise. We then argue that their beliefs may well be the products of common cognitive biases—in which case they provide no challenge to arguments against ePPP research.

Background: an existing argument

Despite its benevolent aims, there is a strong prima facie case that ePPP research increases pandemic risk more than it reduces it. This is because such research carries a non-trivial probability of triggering a human-engineered pandemic, which would constitute an unprecedented catastrophe.

Although they have not offered details justifying their assessments, the US Department of Energy and the Federal Bureau of Investigation have concluded, with low and moderate confidence respectively, that a lab spill started COVID-19. But even if lab spills have not yet caused a pandemic, COVID-19 included, we should not take that as evidence that they never will—after all, spills in labs studying dangerous viruses have occurred, and no safety measure is foolproof.8 9

So we should take seriously the possibility that ePPP research could launch a pandemic via an accidental spill. Even so, accidents are not the only way, or even the most likely way, that ePPP research could result in a pandemic: globally, ‘perhaps 30 000 individuals with doctorates currently possess the skills to follow the most straightforward virus assembly protocols’.4(p. 11) If ePPP research handed them the genomic sequences of pathogens with pandemic potential (as researchers have done in the past by publishing such sequences), almost any of them could eventually use that knowledge, along with CRISPR-Cas9, artificial intelligence, other emerging technologies and increasingly long synthetic DNA fragments, to develop enhanced pathogens. These technologies only become more widely dispersed with time. As a result, in the not-too-distant future, malevolent actors will likely have the means to create and release recombinant viruses in global transportation hubs with relative ease. Further ePPP research thus conditions our avoidance of a devastating engineered pandemic on the goodwill and fastidiousness of each and every one of tens of thousands of researchers—a fragile protection.4

A non-trivial probability of an unprecedented catastrophe makes for an extremely high risk. And engineered pandemics (resulting from viruses made possible by ePPP research) would be unprecedented catastrophes. That is because malevolent actors would mobilise new knowledge from ePPP research to design viruses that combine the lethality of the most lethal known viruses, the communicability of the most communicable ones and other traits that tend to augment destructive and disruptive potential. As difficult as this may be, they could release many viruses until one succeeds. The resulting death toll would be orders of magnitude larger than those of past pandemics.4

Some scholars question whether ePPP research has the benefits that proponents ascribe to it.4 8 But even if it has those benefits, reducing natural pandemic deaths numbering thousands (e.g. 2009 H1N1) or millions (e.g. COVID-19) is a bad trade if it non-trivially increases the risk of a virus killing billions.4 One analyst, relying on the conservative Bayesian assumption that there is a 1% annual risk that one of tens of thousands of actors with the know-how and means would turn out to be omnicidal enough to create such a virus, concluded that, in expectation, ‘credible pandemic virus identification will kill a hundred people for every person it might save’.4
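To see the shape of this comparison, consider a deliberately simplified expected-value sketch; the figures below are illustrative placeholders, not the cited analyst’s actual parameters. Suppose the information produced by ePPP research carries a 1% annual probability of enabling an engineered pandemic that kills a billion people, and suppose (our stipulation) that the same research saves on the order of 100 000 lives per year in expectation by improving preparedness. Then:

$$
\frac{\text{expected annual deaths caused}}{\text{expected annual deaths averted}} = \frac{0.01 \times 10^{9}}{10^{5}} = 100.
$$

On these stipulated numbers, the research kills a hundred people in expectation for every person it saves, the same order of magnitude as the cited conclusion. The point of the sketch is only that no exotic assumptions are needed for the expected harms to dominate.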

On balance, then, there is a serious argument against conducting, funding or allowing most ePPP research. Well-intended ePPP research typically increases net fatality risk considerably instead of decreasing it. Like drinking salt water to quench thirst, it exacerbates the very problem it was meant to solve.

This argument may have caused you to raise an eyebrow—after all, the world has never experienced a pandemic that kills billions of humans. However, only a crude view of nature prohibits believing that the future may differ in important ways from the past: ‘The man who has fed the chicken every day throughout its life at last wrings its neck instead, showing that more refined views as to the uniformity of nature would have been useful to the chicken’.10(p. 63)

This argument has been defended with detailed calculations of the balance of risks and benefits.4 9 In response, proponents of ePPP research recently asserted only that the ‘tangible benefits often far outweigh the theoretical risks’, with no supporting quantitative analysis.7(p. 5) Proponents list the potential benefits of ePPP research and concede its potential harms,7 but they rarely quantify the prospective benefits, let alone show that those exceed the prospective harms. One attempt to do so11 was the target of significant challenges.9 And even if that attempt accurately estimates the risks of work performed where strict regulations are followed, its results cannot be extrapolated to settings where regulations are looser or less closely followed. In any event, proponents rarely cite this or any other attempt to quantify the risks and benefits of ePPP research. This raises two questions: why do so many proponents dismiss the possibility that the benefits of ePPP research might not outweigh the risks, and should we trust their assertion that the benefits do outweigh the risks?

How reliable are proponents’ assertions that the benefits of ePPP research outweigh its risks?

In many areas of medicine, it makes sense to defer to expert (e.g. clinical) judgement, even if clinicians cannot articulate the experience and wisdom that ground their judgement. However, this is not true when we have reason to believe that expert judgements are likely to be compromised by biases that undermine their reliability, as they often are.12

And we have such reason in the case of experts who support ePPP research.

First, the processes generating expert intuitions could lead proponents to underestimate the probability of a human-engineered pandemic. In response to concerns about risky research, proponents cite existing regulatory structures as if they could not fail or be evaded by a malevolent actor.7 Such underappreciation of risk may reflect the sheer fact that we have never experienced a devastating engineered pandemic (that the risk is ‘theoretical’, not ‘tangible’). It may also reflect the common tendency to assign too little weight to what are perceived as small-probability events, acting as if the probability were zero.13 This combined bias is especially common in risky decisions that one has made frequently without experiencing harm14—precisely the situation of proponent researchers, who have never unleashed a pandemic. Proponents’ trope that ‘nature is the ultimate bioterrorist and we need to do all we can to stay one step ahead’ neglects what actual bioterrorists could do with future capabilities.15 In light of these considerations, proponents are quite plausibly underestimating the probability that ePPP research will produce engineered pandemics.

Moving to the magnitude of harm, we are all prone to ‘psychic numbing’: as the number of fatalities grows, our intuitions become less sensitive to differences between fewer and more fatalities; we place less than twice the weight on preventing two deaths as on preventing one, and when the number of potential fatalities becomes too large to convey emotional meaning, our concern may fade or even collapse completely.16 By extension, the intuitions of ordinary people and experts alike place far less than a thousand times more weight on preventing, say, billions of fatalities than on preventing millions, although preventing a billion fatalities is roughly a thousand times more important than preventing a million. Intuition thus fails to register what is in fact the case: that the harms of a human-engineered pandemic would be orders of magnitude greater than those of a natural pandemic.
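One common way to formalise such diminishing sensitivity, offered here as our own illustration rather than as the cited study’s model, is a sublinear weighting function $w(n) = n^{\alpha}$ with $\alpha < 1$, where $n$ is the number of deaths prevented and $w(n)$ the intuitive weight placed on preventing them. Any such function yields $w(2) < 2\,w(1)$, and at scale:

$$
\frac{w(10^{9})}{w(10^{6})} = \left(\frac{10^{9}}{10^{6}}\right)^{\alpha} = 10^{3\alpha} \ll 10^{3} \quad \text{for } \alpha < 1.
$$

With $\alpha = 0.5$, for instance, preventing a billion deaths feels only about 32 times as weighty as preventing a million, rather than a thousand times.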

These biases alone cast doubt on the accuracy of proponents’ calculation-free assurances that ePPP research will do more good than harm. But there is a third, less familiar influence that may explain these common assurances: the urge to ‘do something’,17 which leads to so-called ‘action bias’.18

In a classic study by Angela Fagerlin and colleagues, respondents were asked to imagine that they had been diagnosed with a cancer that had a 5% chance of killing them. They had to choose between treatment and watchful waiting. Some respondents were given the option of immediately curing the cancer with surgery, but were informed that surgery increased their overall risk of death to 10% (i.e. surgery made death twice as likely as watchful waiting). Most respondents replied that they would choose surgery. Given respondents’ explanations of their choices, the authors concluded: ‘Few people can imagine standing by and doing nothing after being diagnosed with cancer’.17

Medicine is rife with tests and treatments that are considered ‘low-value’ or ‘overuse’ in the sense that their prospective medical harm to the patient exceeds any prospective medical benefit to her.19 Such ‘action bias’ occurs in antibiotic use, Caesarean sections, orthopaedic surgeries, cancer screening,20 cancer treatment (unnecessary thyroidectomies, prostatectomies, mastectomies, hysterectomies)20 and invasive heart procedures. US Preventive Services Task Force ‘Evidence Updates’ regularly expose new cases of overuse. Some overuse is driven by the financial interests of for-profit providers,19 just as some support for ePPP research may reflect researchers’ incentives to fund their own work. But overuse is often driven by benevolent clinicians and patients determined to ‘do something’ about the patient’s looming health risks—sometimes in the face of strong evidence that they are increasing net risk.

The urge to do something can be so overwhelming that it leads people to ignore evidence on relative risks, or even deny its credibility. In a frivolous but instructive analogy, soccer goalies almost always jump to the right or left when trying to stop penalty kicks, ignoring substantial evidence that moving away from the middle of the net significantly increases their risk of being scored on.18 In studies on attitudes towards antibiotics, some people retained their desire for antibiotics by denying the evidence that antibiotics would be unhelpful and potentially detrimental.21

These findings make it a serious possibility that the urge to ‘do something’ and other biases explain why many people endorse ePPP research. We are all human. Just as cancer patients have trouble standing by and doing nothing when facing the looming threat of cancer, compassionate researchers may have trouble standing by and doing nothing when facing the looming threat of natural pandemics. Because this may well be the source of their assertions that the benefits outweigh the risks, these assertions should not be relied on, especially for such a high-stakes decision.

Back to assessing ePPP research

We cannot just rely on proponents’ assurances that any risks of ePPP research are worth incurring. Such assertions are too likely to be the products of bias. The debate about ePPP research must be settled on other grounds. Rigorous and more comprehensive risk assessments9—as challenging as they may be—are a good start.

Data availability statement

There are no data in this work.

Ethics statements

Patient consent for publication

Ethics approval

Not applicable.

Acknowledgments

For helpful comments on previous drafts, the authors are grateful to Marc Lipsitch, Paul Slovic, and Peter Ubel.

References

Footnotes

  • AL and NE contributed equally.

  • Correction notice Since this paper first published, a funding and acknowledgment statement has been added.

  • Contributors Both authors contributed equally to the final MS. NE acts as guarantor.

  • Funding AL and NE’s work on this essay was supported by a grant from Longview Philanthropy (2039320).

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; internally peer reviewed.