Ethics and Phishing Experiments

Abstract

Phishing is a fraudulent form of email that solicits personal or financial information from the recipient, such as a password, username, or social security or bank account number. The scammer may use the illicitly obtained information to steal the victim’s money or identity or sell the information to another party. The direct costs of phishing to consumers are exceptionally high and have risen substantially over the past 12 years. Phishing experiments that simulate real-world conditions can provide cybersecurity experts with valuable knowledge they can use to develop effective countermeasures and prevent people from being duped by phishing emails. Although these experiments contravene widely accepted informed consent requirements and involve deception, we argue that they can be conducted ethically if risks are minimized, confidentiality and privacy are protected, potential participants have an opportunity to opt out of the research before it begins, and human subjects are debriefed after their participation ends.

Notes

  1. There are several different types of phishing attacks (Federal Trade Commission 2017; Ponemon Institute 2015). In this article, we focus on what is often referred to as “spear phishing.”

  2. Organizations may also use these approaches for training employees in how to avoid phishing attacks, but we will focus on research activities in this paper.

  3. This approach is similar to the penetration testing methodologies described by Dimkov et al. (2010). A penetration test is an attempt to gain access to an organization’s secure information. The purpose of the test is to obtain knowledge that will help the organization improve its security. A physical penetration test is an attempt to gain access to information by physical means, such as removing a laptop from the organization or using a USB drive to download information. To provide information that is accurate and reliable, penetration tests should model real-world conditions and therefore may need to include some deception of employees.

  4. It is worth noting that the Common Rule does not apply to an organization that receives no U.S. federal funding for research involving human subjects. We would argue that organizations should voluntarily comply with the ethical principles underlying the Common Rule when conducting phishing experiments with human subjects, even if they are not required to do so by law.

  5. References are to the 2009 version of the Common Rule. On 19 January 2017, the Obama Administration published long-awaited revisions to the Common Rule; however, the Trump Administration may make additional changes to these regulations or delay their implementation. The changes to the Common Rule do not impact the discussion of phishing experiments in this paper because they do not affect waivers of informed consent requirements for social or behavioral research. Although the changes include a new category of social/behavioral research exempted from the regulations, i.e. research involving benign interventions, this exemption only applies if the subjects prospectively agree to the intervention, which would not occur in the phishing experiments discussed herein (Department of Homeland Security et al. 2017).

  6. In other countries these committees may be called research ethics boards or research ethics committees.

  7. We assume the organization would have information about email users’ age.

  8. One way to help resolve this issue would be to conduct studies that compare opt-in and opt-out procedures to determine whether either method has a substantial enrollment bias. However, it may be difficult to obtain data for these studies because researchers will not be asking subjects for their consent and therefore will not have access to important information that might bias subject selection. The researchers in our proposed study would probably have access to some demographic information about the subjects, but they would not have data pertaining to other important variables, such as their awareness of cybersecurity issues or their attitudes toward research participation.

  9. Milgram’s experiments involved two types of human subjects, learners and teachers. The teachers presented the learners with lists of word-pairs they were supposed to memorize. The learners were hooked up to a machine that appeared to be capable of giving them an electric shock. The investigators instructed the teachers to administer a shock to the learners whenever they gave an incorrect answer. Most of the teachers continued administering shocks even when the learners cried out in pain and asked for the experiment to stop. In reality, the learners never received a shock. The purpose of the experiment was to determine whether the teachers would obey instructions to give a shock to the learners. The teachers consented to participate in the study, but they were not told they were being deceived. Milgram debriefed the teachers after their participation was complete and explained the true nature of the experiment to them. See Milgram (1974) for further discussion.

  10. “Social phishing” is more ethically problematic than the phishing experiments discussed in this paper since it involves two people in research without their consent, i.e. the recipient and the friend whose identity is used to send the message.

  11. We recognize that smaller samples may not have enough statistical power to achieve statistically significant results. To address this issue, investigators should carefully select a sample size that is adequately powered but not so large that in-person debriefing is impractical (see the illustrative power calculation after these notes).

  12. Milgram’s experiments involved authorized participation, not authorized deception. See footnote 9.
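
The following is a minimal, illustrative power calculation for the sample-size trade-off described in note 11. It is not part of the original study design: the 30% and 15% click rates are hypothetical, and the sketch assumes Python with the statsmodels library and a two-group comparison of proportions.

    # Hypothetical power calculation for a phishing experiment comparing
    # click rates in two groups (e.g., baseline vs. after a warning message).
    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    # Assumed click rates (illustrative only, not taken from the paper).
    effect = proportion_effectsize(0.30, 0.15)  # Cohen's h for the difference

    # Smallest number of subjects per group for 80% power at alpha = 0.05.
    n_per_group = NormalIndPower().solve_power(
        effect_size=effect, alpha=0.05, power=0.80, alternative="two-sided"
    )
    print(f"Subjects needed per group: {int(round(n_per_group))}")

Under these assumptions, roughly 60 subjects per group would suffice, a sample small enough that in-person debriefing remains feasible.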

References

  • Annas, G. J. (2000). Rules for research on human genetic variation—Lessons from Iceland. New England Journal of Medicine, 342(24), 1830–1833.

  • Bayer, R., Gostin, L. O., Jennings, B., & Steinbock, B. (Eds.). (2006). Public health ethics: Theory, policy, and practice. New York, NY: Oxford University Press.

  • Beauchamp, T., & Childress, J. (2008). Principles of biomedical ethics (8th ed.). New York, NY: Oxford University Press.

  • Benham, B. (2008). The ubiquity of deception and the ethics of deceptive research. Bioethics, 22(3), 147–156.

  • Boynton, M. H., Portnoy, D. B., & Johnson, B. T. (2013). Exploring the ethics and psychological impact of deception in psychological research. IRB, 35(2), 7–13.

  • Brock, D. E. (2008). Philosophical justifications of informed consent. In E. J. Emanuel, C. Grady, R. A. Crouch, R. K. Lie, F. G. Miller, & D. Wendler (Eds.), The Oxford textbook of clinical research ethics (pp. 606–612). New York, NY: Oxford University Press.

  • Buchanan, E., Aycock, J., Dexter, S., Dittrich, D., & Hvizdak, E. (2011). Computer science security research and human subjects: Emerging considerations for research ethics boards. Journal of Empirical Research on Human Research Ethics, 6(2), 71–83.

  • Canadian Institutes of Health Research, Natural Sciences and Engineering Research Council of Canada, and Social Sciences and Humanities Research Council of Canada. (2014). Tri-Council policy statement: Ethical conduct for research involving humans. http://www.pre.ethics.gc.ca/pdf/eng/tcps2-2014/TCPS_2_FINAL_Web.pdf. Accessed 14 July 2017.

  • Cassell, J., & Young, A. (2002). Why we should not seek individual informed consent for participation in health services research. Journal of Medical Ethics, 28(5), 313–317.

  • Council for International Organizations of Medical Sciences. (2016). International ethical guidelines for health-related research involving humans. https://cioms.ch/shop/product/international-ethical-guidelines-for-health-related-research-involving-humans/. Accessed 12 July 2017.

  • Department of Health and Human Services. (2009). Protection of Human Subjects, 45 Code of Federal Regulations 46.

  • Department of Homeland Security; Department of Agriculture; Department of Energy; National Aeronautics and Space Administration; Department of Commerce; Social Security Administration; Agency for International Development; Department of Housing and Urban Development; Department of Labor; Department of Defense; Department of Education; Department of Veterans Affairs; Environmental Protection Agency; Department of Health and Human Services; National Science Foundation; and Department of Transportation. (2017). Federal policy for the protection of human subjects. Federal Register, 82(12), 7149–7274.

  • Dimkov, T., Pieters, W., & Hartel, P. (2010). Two methodologies for physical penetration testing using social engineering. In Proceedings of the annual computer security applications conference (pp. 399–408). New York, NY: Association for Computing Machinery.

  • Dworkin, G. (1988). The theory and practice of autonomy. Cambridge, UK: Cambridge University Press.

  • El-Din, R. S. (2012). To deceive or not to deceive! Ethical questions in phishing research. In Proceedings of the British Computing Society, human–computer interaction 2012 Workshops. http://ewic.bcs.org/upload/pdf/ewic_hci12_ec_paper2.pdf. Accessed 26 April 2017.

  • Emanuel, E. J., Wendler, D., & Grady, C. (2000). What makes clinical research ethical? Journal of the American Medical Association, 283(20), 2701–2711.

  • Epley, N., & Huff, C. (1998). Suspicion, affective response, and educational benefit as a result of deception in psychology research. Personality and Social Psychology Bulletin, 24(7), 759–768.

  • Federal Trade Commission. (2017). Phishing. https://www.consumer.ftc.gov/articles/0003-phishing. Accessed 26 April 2017.

  • Feinberg, J. (1987). Harm to others. New York, NY: Oxford University Press.

  • Finn, P. R. (1995). The ethics of deception in research. In R. L. Penslar (Ed.), Research ethics: Cases and materials (pp. 87–118). Bloomington, IN: Indiana University Press.

  • Finn, P. R., & Jakobsson, M. (2007). Designing ethical phishing experiments. IEEE Technology and Society Magazine, 26, 46–58.

  • Gartner Group. (2007). Phishing costs the U.S. economy $3.2 billion. Press release, 17 December 2007. https://www.finextra.com/news/fullstory.aspx?newsitemid=17871. Accessed 12 July 2017.

  • Gelinas, L., Wertheimer, A., & Miller, F. G. (2016). When and why is research without consent permissible? Hastings Center Report, 46(2), 35–43.

  • Gostin, L. O. (2007). General justifications for public health regulation. Public Health, 121(11), 829–834.

  • Hertwig, R., & Ortmann, A. (2008). Deception in social psychological experiments: Two misconceptions and a research agenda. Social Psychology Quarterly, 71(3), 222–227.

  • Jagatic, T. N., Johnson, N. A., Jakobsson, M., & Menczer, F. (2006). Social phishing. Communications of the ACM, 50(10), 94–100.

  • Junghans, C., Feder, G., Hemingway, H., Timmis, A., & Jones, M. (2005). Recruiting patients to medical research: Double blind randomised trial of “opt-in” versus “opt-out” strategies. British Medical Journal, 331(7522), 940.

  • MacKay, D. (2015). Opt-out and consent. Journal of Medical Ethics, 41(10), 832–835.

  • Milgram, S. (1974). Obedience to authority. New York, NY: Harper & Row.

  • Miller, F. G. (2008). Research on medical records without informed consent. Journal of Law, Medicine & Ethics, 36(3), 560–566.

  • Miller, F. G., & Emanuel, E. J. (2008). Quality-improvement research and informed consent. New England Journal of Medicine, 358(8), 765–767.

  • Miller, F. G., Gluck, J. P., Jr., & Wendler, D. (2008). Debriefing and accountability in deceptive research. Kennedy Institute of Ethics Journal, 18(3), 235–251.

  • National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1979). The Belmont report: Ethical principles and guidelines for the protection of human subjects of research. Washington, DC: Department of Health, Education, and Welfare.

  • Oczak, M., & Niedźwieńska, A. (2007). Debriefing in deceptive research: A proposed new procedure. Journal of Empirical Research on Human Research Ethics, 2(3), 49–59.

  • Pihl, R., Zacchia, C., & Zeichner, A. (1981). Follow-up analysis of the use of deception and aversive contingencies in psychological experiments. Psychological Reports, 48(3), 927–930.

  • Ponemon Institute. (2015). The costs of phishing & value of employee training. https://info.wombatsecurity.com/hubfs/Ponemon_Institute_Cost_of_Phishing.pdf?t=1499361243887. Accessed 6 July 2017.

  • Selgelid, M. J. (2005). Ethics and infectious disease. Bioethics, 19(3), 272–289.

  • Smith, S. S., & Richardson, D. (1983). Amelioration of deception and harm in psychological research: The important role of debriefing. Journal of Personality and Social Psychology, 44(5), 1075–1082.

  • Sobel, A. (1978). Deception in social science research: Is informed consent possible? Hastings Center Report, 8(5), 40–45.

  • Soliday, E., & Stanton, A. L. (1995). Deceived versus nondeceived participants’ perceptions of scientific and applied psychology. Ethics and Behavior, 5(1), 87–104.

  • United Kingdom, Department of Health. (2005). Research governance framework for health and social care. https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/139565/dh_4122427.pdf. Accessed 14 July 2017.

  • Vellinga, A., Cormican, M., Hanahoe, B., Bennett, K., & Murphy, A. W. (2011). Opt-out as an acceptable method of obtaining consent in medical research: A short report. BMC Medical Research Methodology, 11, 40.

  • Wendler, D., & Miller, F. G. (2008). Deception in research. In E. J. Emanuel, C. Grady, R. A. Crouch, R. K. Lie, F. G. Miller, & D. Wendler (Eds.), The Oxford textbook of clinical research ethics (pp. 315–324). New York, NY: Oxford University Press.

  • World Medical Association. (2013). Declaration of Helsinki: Ethical principles for medical research involving human subjects. https://www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/. Accessed 12 July 2017.

Acknowledgements

This research was funded by the Intramural Program of the National Institute of Environmental Health Sciences (NIEHS), National Institutes of Health (NIH) (ZIAES-102646-08). It does not represent the views of the NIEHS, NIH, or U.S. government.

Author information

Corresponding author

Correspondence to David B. Resnik.

Ethics declarations

Conflict of interest

The authors have no conflicts of interest to disclose.

About this article

Cite this article

Resnik, D.B., Finn, P.R. Ethics and Phishing Experiments. Sci Eng Ethics 24, 1241–1252 (2018). https://doi.org/10.1007/s11948-017-9952-9
