
Concepts of Risk in Nanomedicine Research

Published online by Cambridge University Press: 01 January 2021

Extract

Risk is the most often cited reason for ethical concern about any medical science or technology, particularly new technologies that are not yet well understood or that create unfamiliar conditions. In fact, while risk and risk-benefit analyses are but one aspect of ethical oversight, ethical review and risk assessment are sometimes taken to mean the same thing. This is not surprising, since both the Common Rule and Food and Drug Administration (FDA) regulations foreground procedures for minimizing risk to human subjects and require local IRBs to engage in some form of risk-benefit analysis in decisions to approve or deny proposed research. Existing ethical review and oversight practices are based on the presumption that risk can be clearly identified within the planned activities of the protocol, that metrics can predict potential hazards with reasonable accuracy, and that mitigation measures can be taken to deal with unintended, harmful, or catastrophic events.

Type: Symposium
Copyright © American Society of Law, Medicine and Ethics 2012


References

Kimmelman, J., “Valuing Risk: The Ethical Review of Clinical Trial Safety,” Kennedy Institute of Ethics Journal 14, no. 4 (2004): 369–393, at 380. See also London, Kimmelman, and Emborg, who suggest that IRBs must evaluate the quality of that information and its potential social value as part of the process of ensuring that risks are reasonable. London, A., Kimmelman, J., and Emborg, M., “Beyond Access Versus Protection in Trials of Innovative Therapies,” Science 328, no. 5980 (2010): 828–830, at 830. Whether or not review bodies are pragmatically capable of additionally conducting more sociological review, inclusion of factors beyond protocol specifics would certainly provide a better understanding of potential areas of risk.
Adams, J., Risk (London: UCL Press, 1995): at 87.
National Human Genome Research Institute (NHGRI), “Nanodiagnostics and Nanotherapeutics: Building Research Ethics and Oversight,” National Institutes of Health (NIH) grant #1-RC1-HG005338-01 (Wolf, PI; McCullough, Hall, Kahn, Co-Is).
Fatehi, L., Wolf, S., McCullough, J., Hall, R., et al., “Recommendations for Nanomedicine Human Subjects Research Oversight: An Evolutionary Approach for an Emerging Field,” Journal of Law, Medicine & Ethics 40, no. 4 (2012): 716–750.
Roco, M. C. and Renn, O., “Nanotechnology and the Need for Risk Governance,” Journal of Nanoparticle Research 8, no. 2 (2006): 153–191.
Thanks to Catherine Turng for information gleaned from a search using Google Scholar, conducted in June 2012. Readers should note that no further analysis was conducted to determine the nature of the articles; that is, whether they were raising new concerns, suggesting options to deal with nanorisks, or refuting suggestions that risks were inherently greater or unique with nanotechnologies.
Garland, D., “The Rise of Risk,” in Ericson, R. and Doyle, A., eds., Risk and Morality (Toronto: University of Toronto Press, 2003): 48–86, at 49.
The term “risk” has its historical and etymological roots in commerce (in which the chance of profit had to be countered with the chance of loss) and insurance (more related to liability, and managed by spreading possibilities of harm or loss across a collective group); these origins are further described in Ewald, F., “Insurance and Risk,” in Burchell, G., Gordon, C., and Miller, P., eds., The Foucault Effect: Studies in Governmentality (Chicago: University of Chicago Press, 1991), and Hacking, I., “Risk and Dirt,” in Ericson, R. and Doyle, A., eds., Risk and Morality (Toronto: University of Toronto Press, 2003): at 22–47, among others. A classic discussion of the separation of risk from notions of danger, and the implications for governance, is found in Castel, R., “From Dangerousness to Risk,” in Burchell, G., Gordon, C., and Miller, P., eds., The Foucault Effect: Studies in Governmentality (Chicago: University of Chicago Press, 1991): at 281–298.
Id. (Ewald), at 199. See also Castel, id., at 287.
Beck, U., Risk Society: Towards a New Modernity (London: Sage, 1992); Giddens, A., Modernity and Self-Identity (Cambridge: Polity Press, 1991); Luhmann, N., Risk: A Sociological Theory (New York: A. De Gruyter, 1993).
Dean, M., “Risk, Calculable and Incalculable,” in Lupton, D., ed., Risk and Sociocultural Theory (New York: Oxford University Press, 1999): 131–159, at 131. Governmentality is associated with Michel Foucault. See Foucault, M., “Governmentality,” in Burchell, G., Gordon, C., and Miller, P., eds., The Foucault Effect: Studies in Governmentality (Chicago: University of Chicago Press, 1991): at 87–104.
For an analysis of the rise of risk management, particularly in contemporary commercial and governmental organizations, see Power, M., Organized Uncertainty: Designing a World of Risk Management (New York: Oxford University Press, 2007): at 8. Annas, Bostrom and Cirkovic, Van Loon, and others have analyzed this situation as a state of urgency and terror management, which has accelerated since 9/11. Where Giddens attempts to reconcile the notion of a social contract of individuals within social institutions, Beck sees instead an increased public mistrust in institutions: either they cannot be trusted to protect citizens or they lack the capability to do so. Details and critiques of such arguments can be found in: Annas, G., Worst Case Bioethics: Death, Disaster and Public Health (New York: Oxford University Press, 2010); Bostrom, N. and Cirkovic, M., eds., Global Catastrophic Risks (New York: Oxford University Press, 2008); Van Loon, J., Risk and Technological Culture: Towards a Sociology of Virulence (New York: Routledge, 2002). For a discussion of the rhetoric of the apocalyptic in nanotechnology, see Gordijn, B., “Nanoethics: From Utopian Dreams and Apocalyptic Nightmares towards a More Balanced View,” Science and Engineering Ethics 11, no. 4 (2005): 521–533.
Bradbury, J., “The Policy Implications of Differing Concepts of Risk,” Science, Technology and Human Values 14, no. 4 (1989): 380–399. “The implicit reification of risk can be seen in the continued attempts to make a distinction between fact and value, between activities of identification and estimation on the one hand and evaluation on the other. This distinction may be useful as an analytical tool; it is misleading when it assumes that risk identification and estimates represent value-neutral activities and that evaluation may be taken as a separate step.” (Id., at 382).
Douglas, M. and Wildavsky, A., Risk and Culture (Berkeley, CA: University of California Press, 1982).
Douglas, M., Risk and Blame: Essays in Cultural Theory (New York: Routledge, 1992): at 31.
See, for example, Cutter, S., “The Vulnerability of Science and the Science of Vulnerability,” Annals of the Association of American Geographers 93, no. 1 (2003): 1–12.
Ewald, F., “The Return of Descartes's Malicious Demon: An Outline of a Theory of Precaution,” in Baker, T. and Simon, J., eds., Embracing Risk: The Changing Culture of Insurance and Responsibility (Chicago: University of Chicago Press, 2002): at 273–301.
Serebrov, M., “FDA Faces Challenge of Dealing With Scientific Uncertainty,” available at <http://www.bioworld.com/content/fda-faces-challenge-dealing-scientific-uncertainty-0> (last visited November 8, 2012).
While experiments are not intended to be therapy, the line is often blurred. See Löwy, I., “Experimental Bodies,” in Cooter, R. and Pickstone, J., eds., Companion to Medicine in the Twentieth Century (New York: Routledge, 2003): at 435–450. For a history of the development of formalized human experimentation, including the incorporation of risk and benefit analyses, see Halpern, S., Lesser Harms (Chicago: University of Chicago Press, 2006); Lederer, S., Subjected to Science: Human Experimentation in America before the Second World War (Baltimore: Johns Hopkins University Press, 1995); and Marks, H., The Progress of Experiment: Science and Therapeutic Reform in the United States, 1900–1990 (New York: Cambridge University Press, 1997).
Lederer suggests that the principle of specificity drove the need to test theories and demonstrate effects in humans rather than in animals or by other laboratory means. She suggests that the growth of bacteriology was responsible for considerable human experimentation: once identified, a microbe thought to cause human disease had to be tested in humans to confirm that it was that specific microbe, or that a particular mode of transmission was responsible (Id., at 3). In addition to the rapid growth of knowledge in bacteriology, immunology, and disease etiologies (particularly cancer), the end of the 19th century and the beginning of the 20th, particularly after World Wars I and II, brought a superabundance of new drugs, devices, and surgical innovations whose value could not be verified without systematic research in humans. At the same time, a shift from hospitals as custodial institutions to places in which people could receive more advanced treatments created settings in which pools of patients could be tested and monitored.
Id., at 73. It is notable that such guidelines emerged only after years of debate stimulated by anti-vivisection movements to protect animals, and appeared six years after animal protection guidelines were put in place in the United States.
Faden, R. and Beauchamp, T., A History and Theory of Informed Consent (New York: Oxford University Press, 1986): at 76.
Schlich, T., “Risk and Medical Innovation: A Historical Perspective,” in Schlich, T. and Tröhler, U., eds., The Risks of Medical Innovation: Risk Perception and Assessment in Historical Context (New York: Routledge, 2006): 1–17, at 4.
See Halpern, supra note 19, at 91.
Id., at 110. On the failure of procedural documents to contain risk, see also Vaughan, D., “Organizational Rituals of Risk and Error,” in Hutter, B. and Power, M., eds., Organizational Encounters with Risk (New York: Cambridge University Press, 2004): at 33–66.
See Barke on balancing risk, benefit, and institutional and social goals against the protection of the individual. Barke, R., “Balancing Uncertain Risks and Benefits in Human Subjects Research,” Science, Technology & Human Values 34, no. 3 (2009): 337–364.
Daemmrich, A., “Interleukin-2 from Laboratory to Market,” in Schlich, T. and Tröhler, U., eds., The Risks of Medical Innovation: Risk Perception and Assessment in Historical Context (New York: Routledge, 2006): at 242–261.
Id., at 251. Cetus's stock value plummeted after its product initially failed to win FDA approval, and the company was subsequently sold to Chiron. The value of biotech companies is strongly linked to how products fare in regulatory processes, but most analyses examine business risk assessments entirely separately from the way medical risk assessments are conducted. Future empirical work should shed light on these complex, interwoven processes.
Id., at 255.
See, for example, the collection of studies in Renn, O. and Rohrmann, B., eds., Cross-Cultural Risk Perception: A Survey of Empirical Studies (Boston: Kluwer Academic Publishers, 2000).
See Luhmann, supra note 11.
Hacking, I., The Taming of Chance (New York: Cambridge University Press, 1990).
Vaughan, D., “Organizational Rituals of Risk and Error,” in Hutter, B. and Power, M., eds., Organizational Encounters with Risk (New York: Cambridge University Press, 2004): at 33–66.
Dresser, R., “Building an Ethical Foundation for First-in-Human Nanotrials,” Journal of Law, Medicine & Ethics 40, no. 4 (2012): 802–808; Resnik, D. B. and Tinkle, S. S., “Ethical Issues in Clinical Trials Involving Nanomedicine,” Contemporary Clinical Trials 28, no. 4 (2007): 433–441; Reynolds, W. W. and Nelson, R. M., “Risk Perception and Decision Processes Underlying Informed Consent to Research Participation,” Social Science and Medicine 65, no. 10 (2007): 2105–2115.
Hansen, S. F., Larsen, B. H., Olsen, S. I., and Baun, A., “Categorization Framework to Aid Hazard Identification of Nanomaterials,” Nanotoxicology 1, no. 3 (2007): 243–250; Hansen, S., Maynard, A., Baun, A., Tickner, J. A., and Bowman, D., “Late Lessons from Early Warnings about Nanotechnology,” unpublished manuscript (2012): at 3.
Perrow, C., Normal Accidents: Living with High Risk Technologies (Princeton, NJ: Princeton University Press, 1984).
Vaughan, D., The Challenger Launch Decision: Risky Technology, Culture and Deviance at NASA (Chicago: University of Chicago Press, 1996).
See Fatehi et al., supra note 4.
Roco, M. C. and Bainbridge, W. S., “Converging Technologies for Improving Human Performance: Integrating from the Nanoscale,” Journal of Nanoparticle Research 4, no. 4 (2002): 281–295.
Fu, J. and Yan, H., “Controlled Drug Release by a Nanorobot,” Nature Biotechnology 30, no. 5 (2012): 407–408.
Aida, T., Meijer, E. W., and Stupp, S. I., “Functional Supramolecular Polymers,” Science 335, no. 6070 (2012): 813–817; Ashton, R. S., Keung, A. J., Peltier, J., and Schaffer, D. V., “Progress and Prospects in Stem Cell Engineering,” Annual Review of Chemical and Biomolecular Engineering 2, no. 4 (2011): 479–502; Mata, A., Palmer, L., Tejeda-Montes, E., and Stupp, S. I., “Design of Biomolecules for Nanoengineered Biomaterials for Regenerative Medicine,” Methods in Molecular Biology 811 (2012): 39–49.
Tasciotti, E., Liu, X., Bhavane, R., Plant, K., Leonard, A. D., Price, B. K., Cheng, M. M., Decuzzi, P., Tour, J. M., Robertson, F., and Ferrari, M., “Mesoporous Silicon Particles as a Multistage Delivery System for Imaging and Therapeutic Applications,” Nature Nanotechnology 3, no. 3 (2008): 151–157.
Sheremeta, L., “Nanotechnology and the Ethical Conduct of Research Involving Human Subjects,” Health Law Review 12, no. 3 (2004): 47–56.
Hall, R., Sun, T., and Ferrari, M., “A Portrait of Nanomedicine and Its Bioethical Implications,” Journal of Law, Medicine & Ethics 40, no. 4 (2012): 763–779. For a discussion of disjunctures of older classificatory ways of thinking about bioactivity and biocompatibility, see Hogle, L. F., “Science, Ethics and the ‘Problems’ of Governing Nanotechnologies,” Journal of Law, Medicine & Ethics 37, no. 4 (2009): 749–758, and Koolage, W. J. and Hall, R., “Chemical Action: What Is It, and Why Does It Really Matter?,” Journal of Nanoparticle Research 13, no. 4 (2011): 1401–1417.
See Hansen et al., supra note 36; Grieger, K. D., Hansen, S. F., and Baun, A., “The Known Unknowns of Nanomaterials: Describing and Characterizing Uncertainty within Environmental, Health and Safety Risks,” Nanotoxicology 3, no. 3 (2009): 1–12.
Bawa, R., “Regulating Nanomedicine – Can the FDA Handle It?” Current Drug Delivery 8, no. 3 (2011): 227–234; Miller, J., “Beyond Biotechnology: FDA Regulation of Nanomedicine,” Columbia Science and Technology Law Review 4 (2003): 1–35.
See Koolage and Hall, supra note 44, at 1404.
See Löwy, supra note 19, at 435.
Slovic, P., The Perception of Risk (New York: Routledge, 2001).
Id. (Slovic); Kasperson, R. E., Renn, O., Slovic, P., Brown, H. S., Emel, J., Goble, R., Kasperson, J., and Ratick, S. F., “The Social Amplification of Risk: A Conceptual Framework,” Risk Analysis 8, no. 2 (1988): 178–187. This abbreviated discussion does not do justice to all the arguments made by either psychosocial theorists or behavioral economists. A recent special issue of Risk Analysis (vol. 31, no. 11) reviews analytical concepts as they relate to nanotechnology. See, in particular, Pidgeon, N., Harthorn, B., and Satterfield, T., “Nanotechnology Risk Perceptions and Communication: Emerging Technologies, Emerging Challenges,” Risk Analysis 31, no. 11 (2011): 1694–1700.
Wilde, G., Target Risk 2: A New Psychology of Safety and Health (Toronto: PDE Publications, 2001).
Cass Sunstein, legal scholar and advisor to the Obama Administration, interprets such studies for policy purposes as showing that regulatory schemes are costly and can cause more problems than they solve, and that (within reason) citizens should be guided through incentives to do things the state wants them to do without limiting their freedom of choice. This “choice architecture” can be instilled in law and policy. His stance, which he labels “libertarian paternalism,” is described in Sunstein, C., Worst-Case Scenarios (Cambridge, MA: Harvard University Press, 2007).
See Dresser, and Resnik and Tinkle, supra note 35.
See Cutter, supra note 17.
Wynne, B., “Unruly Technology: Practical Rules, Impractical Discourses and Public Understanding,” Social Studies of Science 18, no. 1 (1988): 147–167.
See Fatehi et al., supra note 4.
DeVille, K., “Law, Regulation and the Medical Use of Nanotechnology,” in Jotterand, F., ed., Emerging Conceptual, Ethical and Policy Issues in Bionanotechnology, Philosophy and Medicine vol. 101 (Dordrecht: Springer, 2008): at 181–200; Sanhai, W., Spiegel, J., and Ferrari, M., “A Critical Path Approach to Advance Nanoengineered Medical Products,” Drug Discovery Today: Technologies 4, no. 2 (2007): 35–41.
See Kimmelman, supra note 1.
Power, M., “Risk Management and the Responsible Organization,” in Ericson, R. and Doyle, A., eds., Risk and Morality (Toronto: University of Toronto Press, 2003): 145–164, at 150. Many firms have begun to use triple-bottom-line or value statements in their annual reporting as a way of creating public statements of accountability. Increasingly, various stakeholders (public or special interest groups, investors, and others) are identified by firms as a potential source of risk for planned projects. Having such public statements and a visible risk management scheme is seen as one way to manage risk proactively.
See Fatehi et al., supra note 4.