Abstract
Developers are often the engine behind the creation and implementation of new technologies, including in the artificial intelligence surge currently underway. In many cases these new technologies introduce significant risks to affected stakeholders, risks that such a dominant party could reduce and mitigate. This is fully recognized by texts that analyze risks in the current AI transformation, which suggest voluntary adoption of ethical standards, and the imposition of ethical standards via regulation and oversight, as tools to compel developers to reduce such risks. However, these texts usually sidestep the question of how aware developers are of the risks they are creating with these new AI technologies, and what their attitudes are towards such risks. This paper seeks to rectify this gap in research by analyzing an ongoing case study. Focusing on six Israeli AI startups in the field of radiology, I carry out a content analysis of their online material in order to examine these companies’ stances towards the potential threat their automated tools pose to patient safety and to the work-standing of healthcare professionals. Results show that these developers are aware of the risks their AI products pose, but tend to deny their own role in the technological transformation and to dismiss or downplay the risks to stakeholders. I conclude by tying these findings back to current risk-reduction recommendations for advanced AI technologies, and suggest which of them hold more promise in light of developers’ attitudes.
Data availability
All material used complies with field standards.
Code availability
Not applicable.
Notes
This was both my personal experience, in a large number of attempts to secure such interviews, and the experience of colleagues in a research forum on big data, privacy and surveillance, as well as in other forums.
It should be noted that developers are quite a heterogeneous group. It contains subgroups which may operate under different constraints, different logics, and even different ethical standards. For instance, we may divide developers into business-driven and profession-driven positions. CEOs, CFOs and marketing personnel may be driven almost entirely by bottom-line and company-value considerations, while other professionals, such as programmers, software engineers and CTOs, may be driven by standard operating procedures, professional guidelines, and technical challenges. Still, what justifies bunching the two categories together is that the latter need to conform to the business logic of the company if they wish to stay on its payroll. This is especially true in startup settings, in which the existence of the company is dependent on reaching pre-determined milestones, and in which individual compensation is tied to company performance, usually via the issuance of ‘options’.
In this text I am dealing with changes to the risks patients face in the current AI transformation, with a focus on a case study (radiology) in which AI technologies keep doctors as mediators between the AI and patients. However, some current AI healthcare technologies come into direct contact with patients and thus produce additional risk factors, which are outside the scope of this article, and are covered in a growing body of work that deals with the physician–patient relationship (e.g. Dalton-Brown, 2020).
A good equivalent is the factory. Automation did not eliminate all factory jobs, but significantly reduced the number of workers required to produce each unit of output, and led to the replacement of some of the skilled jobs in the plant with unskilled jobs.
The clearest indication that this is a probable outcome comes from radiology’s recent history. Radiology’s previous technological revolution replaced film and analogue systems with a totally digital process. Except perhaps in the very short run, the significant efficiency gains made by this digital transformation (e.g. Langen et al., 2003; Nitrosi et al., 2007) were not enjoyed by the medical staff, which explains why radiologists once again find themselves overworked, overloaded and burned out (Chetlen et al., 2019; Harolds et al., 2016; Rimmer, 2017).
Specifically, the term ‘out of a job’ or one of its alternatives, such as ‘make workers obsolete’ or ‘redundant’, is used in these discussions. These and other terms informed the analysis of the ventures’ websites. For the full list of key terms see Appendix 1.
References
Academy of Medical Royal Colleges. (2019). Artificial intelligence in healthcare. Academy of Medical Royal Colleges.
Acemoglu, D., & Autor, D. (2010). Skills, tasks and technologies: Implications for employment and earnings (Working Paper No. 16082). National Bureau of Economic Research.
Ahuja, A. S. (2019). The impact of artificial intelligence in medicine on the future role of the physician. PeerJ, 7, e7702. https://doi.org/10.7717/peerj.7702
Anderson, P. N. (2004). What rights are eclipsed when risk is defined by corporatism?: Governance and GM food. Theory, Culture & Society, 21(6), 155–169. https://doi.org/10.1177/0263276404050460
Beck, U. (1992). Risk society: Towards a new modernity. Sage Publications.
Bostick, T. P., Holzer, T. H., & Sarkani, S. (2017). Enabling stakeholder involvement in coastal disaster resilience planning. Risk Analysis, 37(6), 1181–1200. https://doi.org/10.1111/risa.12737
Bradbury, J. A. (1989). The policy implications of differing concepts of risk. Science, Technology & Human Values, 14(4), 380–399. https://doi.org/10.1177/016224398901400404
Callon, M., Lascoumes, P., & Barthe, Y. (2009). Acting in an uncertain world: An essay on technical democracy. MIT Press.
Carvalho, A. (2008). Media(ted) discourse and society. Journalism Studies, 9(2), 161–177. https://doi.org/10.1080/14616700701848162
CB Information Services. (2021). State of healthcare report: Investment & sector trends to watch. CB Information Services.
Challen, R., Denny, J., Pitt, M., Gompels, L., Edwards, T., & Tsaneva-Atanasova, K. (2019). Artificial intelligence, bias and clinical safety. BMJ Quality & Safety, 28(3), 231–237. https://doi.org/10.1136/bmjqs-2018-008370
Chetlen, A. L., Chan, T. L., Ballard, D. H., Frigini, L. A., Hildebrand, A., Kim, S., et al. (2019). Addressing burnout in radiologists. Academic Radiology, 26(4), 526–533. https://doi.org/10.1016/j.acra.2018.07.001
Choy, G., Khalilzadeh, O., Michalski, M., Do, S., Samir, A. E., Pianykh, O. S., et al. (2018). Current applications and future impact of machine learning in radiology. Radiology, 288(2), 318–328. https://doi.org/10.1148/radiol.2018171820
Coiera, E. W. (2015). Technology, cognition and error. BMJ Quality & Safety, 24(7), 417–422. https://doi.org/10.1136/bmjqs-2014-003484
Collier, M., Fu, R., Yin, L., & Christiansen, P. (2017). Artificial intelligence: Healthcare’s new nervous system. Accenture Health.
Crawford, K., & Whittaker, M. (2016). The AI now report: The social and economic implications of artificial intelligence technologies in the near-term. AI Now Institute.
Dalton-Brown, S. (2020). The ethics of medical AI and the physician-patient relationship. Cambridge Quarterly of Healthcare Ethics, 29(1), 115–121. https://doi.org/10.1017/S0963180119000847
Datta Burton, S., Mahfoud, T., Aicardi, C., & Rose, N. (2021). Clinical translation of computational brain models: Understanding the salience of trust in clinician-researcher relationships. Interdisciplinary Science Reviews, 46(1–2), 138–157. https://doi.org/10.1080/03080188.2020.1840223
Drake, F. (2011). Protesting mobile phone masts: Risk, neoliberalism, and governmentality. Science, Technology & Human Values, 36(4), 522–548. https://doi.org/10.1177/0162243910366149
Evans, R., & Plows, A. (2007). Listening without prejudice? Re-discovering the value of the disinterested citizen. Social Studies of Science, 37(6), 827–853.
Fairclough, N. (2012). Critical discourse analysis. International Advances in Engineering and Technology, 7, 452–487.
Feenberg, A. (2005). Critical theory of technology: An overview. Tailoring Biotechnologies, 1(1), 47–64.
Frazzini, R. (2001). Technology impact: Some thoughts on deskilling and design responsibility. IEEE Control Systems, 21(1), 8–12. https://doi.org/10.1109/37.898787
Galloway, S. (2017). The four: The hidden DNA of Amazon, Apple, Facebook and Google. Penguin Books.
Gillen, M. W. (2008). Degradation of piloting skills (MS dissertation). University of North Dakota, Grand Forks, ND.
Gong, B., Nugent, J. P., Guest, W., Parker, W., Chang, P. J., Khosa, F., & Nicolaou, S. (2019). Influence of artificial intelligence on Canadian medical students’ preference for radiology specialty: A National survey study. Academic Radiology, 26(4), 566–577. https://doi.org/10.1016/j.acra.2018.10.007
Gray, A. (2017). These charts will change how you see the rise of artificial intelligence. World Economic Forum. Retrieved from https://www.weforum.org/agenda/2017/12/charts-artificial-intelligence-ai-index/
Hall, C. (2021). Forecast: Health care in 2021 will focus on ’digitization of the patient experience’. Crunchbase News.
Hamlett, P. W. (2003). Technology theory and deliberative democracy. Science, Technology & Human Values, 28(1), 112–140.
Harolds, J. A., Parikh, J. R., Bluth, E. I., Dutton, S. C., & Recht, M. (2016). Burnout of radiologists: Frequency, risk factors, and remedies: A report of the ACR commission on human resources. Journal of the American College of Radiology, 13(4), 411–416. https://doi.org/10.1016/j.jacr.2015.11.003
Holford, W. D. (2020). An ethical inquiry of the effect of cockpit automation on the responsibilities of airline pilots: Dissonance or meaningful control? Journal of Business Ethics. https://doi.org/10.1007/s10551-020-04640-z
Hurwitz, L. B., Alvarez, A. L., Lauricella, A. R., Rousse, T. H., Montague, H., & Wartella, E. (2018). Content analysis across new media platforms: Methodological considerations for capturing media-rich data. New Media & Society, 20(2), 532–548. https://doi.org/10.1177/1461444816663927
Irwin, A. (2001). Constructing the scientific citizen: Science and democracy in the biosciences. Public Understanding of Science, 10(1), 1–18. https://doi.org/10.3109/a036852
Irwin, A., & Michael, M. (2003). Science, social theory & public knowledge. Open University Press.
Israel Advanced Technology Industries. (2018). Israel’s life sciences industry IATI report 2018. Israel Advanced Technology Industries.
Israel’s Prime Minister’s Office. (2018). Government has approved the national plan for digital health as a national growth engine. Israel’s Prime Minister’s Office.
Jasanoff, S. (1998). The political science of risk perception. Reliability Engineering & System Safety, 59(1), 91–99. https://doi.org/10.1016/S0951-8320(97)00129-4
Jasanoff, S. (2002). Citizens at risk: Cultures of modernity in the US and EU. Science as Culture, 11(3), 363–380. https://doi.org/10.1080/0950543022000005087
Jasanoff, S. (2003). Technologies of humility: Citizen participation in governing science. Minerva, 41(3), 223–244.
Kraemer, F., van Overveld, K., & Peterson, M. (2011). Is there an ethics of algorithms? Ethics and Information Technology, 13(3), 251. https://doi.org/10.1007/s10676-010-9233-7
Kristal, T. (2013). The capitalist machine: Computerization, workers’ power, and the decline in labor’s share within US industries. American Sociological Review, 78(3), 361–389. https://doi.org/10.1177/0003122413481351
Lahsen, M. (2005). Technocracy, democracy, and US climate politics: The need for demarcations. Science, Technology & Human Values, 30(1), 137–169.
Langen, H., Bielmeier, J., Wittenberg, G., Selbach, R., & Feustel, H. (2003). Workflow improvement and efficiency gain with near total digitalization of a radiology department. Röfo, 175(10), 1309. https://doi.org/10.1055/s-2003-42889
Latour, B. (2004). Why has critique run out of steam? From matters of fact to matters of concern. Critical Inquiry, 30(2), 225–248. https://doi.org/10.1086/421123
Liew, C. (2018). The future of radiology augmented with artificial intelligence: A strategy for success. European Journal of Radiology, 102, 152–156. https://doi.org/10.1016/j.ejrad.2018.03.019
Macnamara, J. (2005). Media content analysis: Its uses, benefits and best practice methodology. Asia Pacific Public Relations Journal, 6(1), 1–34.
Macrae, C. (2019). Governing the safety of artificial intelligence in healthcare. BMJ Quality & Safety. https://doi.org/10.1136/bmjqs-2019-009484
Marres, N. (2007). The issues deserve more credit: Pragmatist contributions to the study of public involvement in controversy. Social Studies of Science, 37(5), 759–780.
Mayring, P. (2000). Qualitative content analysis. Forum: Qualitative Social Research. https://doi.org/10.17169/fqs-1.2.1089
Morozov, E. (2013). To save everything, click here: The folly of technological solutionism. PublicAffairs.
Nawrocki, T., Maldjian, P. D., Slasky, S. E., & Contractor, S. G. (2018). Artificial intelligence and radiology: Have rumors of the radiologist’s demise been greatly exaggerated? Academic Radiology, 25(8), 967–972. https://doi.org/10.1016/j.acra.2017.12.027
Nitrosi, A., Borasi, G., Nicoli, F., Modigliani, G., Botti, A., Bertolini, M., & Notari, P. (2007). A filmless radiology department in a full digital regional hospital: Quantitative evaluation of the increased quality and efficiency. Journal of Digital Imaging, 20(2), 140. https://doi.org/10.1007/s10278-007-9006-y
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342
Perhac, R. M. (1998). Comparative risk assessment: Where does the public fit in? Science, Technology & Human Values, 23(2), 221–241.
Pesapane, F., Codari, M., & Sardanelli, F. (2018). Artificial intelligence in medical imaging: Threat or opportunity? Radiologists again at the forefront of innovation in medicine. European Radiology Experimental, 2(1), 35. https://doi.org/10.1186/s41747-018-0061-6
Petropoulos, G. (2018). The impact of artificial intelligence on employment. In M. Neufeind, J. O’Reilly, & F. Ranft (Eds.), Work in the digital age: Challenges of the fourth industrial revolution (pp. 119–132). Rowman & Littlefield Publishers.
Recht, M., & Bryan, R. N. (2017). Artificial intelligence: Threat or boon to radiologists? Journal of the American College of Radiology, 14(11), 1476–1480. https://doi.org/10.1016/j.jacr.2017.07.007
Rességuier, A., & Rodrigues, R. (2020). AI ethics should not remain toothless! A call to bring back the teeth of ethics. Big Data & Society, 7(2), 2053951720942541. https://doi.org/10.1177/2053951720942541
Rimmer, A. (2017). Radiologist shortage leaves patient care at risk, warns royal college. BMJ, 359, j4683. https://doi.org/10.1136/bmj.j4683
Rogovin, L. (2018). Israel’s digital health industry in 2018. Start-Up Nation Central.
Shoham, Y., Perrault, R., Brynjolfsson, E., Clark, J., Manyika, J., Niebles, J. C., et al. (2018). The AI index 2018 annual report. Human-Centered AI Initiative.
Short, K. G. (2017). Critical content analysis as a research methodology. In H. Johnson, J. Mathis, & K. G. Short (Eds.), Critical content analysis of children’s and young adult literature: Reframing perspective (pp. 1–15). Routledge.
Singer, D. (2018). The Israeli AI healthcare startup landscape of 2018. StartupHub.ai. Retrieved from https://www.startuphub.ai/israeli-ai-healthcare-startups-2018/. Accessed 29 May 2019.
Stein, R. L. (2017). Gopro occupation: Networked cameras, Israeli military rule, and the digital promise. Current Anthropology, 58(S15), S56–S64. https://doi.org/10.1086/688869
Stenekes, N., Colebatch, H. K., Waite, T. D., & Ashbolt, N. J. (2017). An empirical agent-based model to simulate the adoption of water reuse using the social amplification of risk framework. Science, Technology & Human Values, 37(10), 2005–2022.
Stirling, A. (2008). “Opening up” and “closing down”: Power, participation, and pluralism in the social appraisal of technology. Science, Technology & Human Values, 33(2), 262–294.
Taplin, J. T. (2017). Move fast and break things: How Facebook, Google, and Amazon cornered culture and undermined democracy. Hachette Book Group.
European Commission. (2021). Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain union legislative acts. COM(2021) 206; 2021/0106.
European Parliament and Council of the European Union. (2016). Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).
Topol, E. J. (2019). High-performance medicine: The convergence of human and artificial intelligence. Nature Medicine, 25(1), 44–56. https://doi.org/10.1038/s41591-018-0300-7
U.S. Executive Office of the President. (2016). Preparing for the future of artificial intelligence. U.S. Executive Office of the President.
U.S. Food & Drug Administration. (2019). Proposed regulatory framework for modifications to artificial intelligence/machine learning (AI/ML)-based software as a medical device (SaMD)—Discussion paper and request for feedback. U.S. Food & Drug Administration.
U.S. Food & Drug Administration. (2021). Artificial intelligence/machine learning (AI/ML)-based software as a medical device (SaMD) action plan. U.S. Food & Drug Administration.
Ulnicane, I., Eke, D. O., Knight, W., Ogoh, G., & Stahl, B. C. (2021). Good governance as a response to discontents? Déjà vu, or lessons for AI from other emerging technologies. Interdisciplinary Science Reviews, 46(1–2), 71–93. https://doi.org/10.1080/03080188.2020.1840220
Urban, G., & Koh, K.-N. (2013). Ethnographic research on modern business corporations. Annual Review of Anthropology, 42, 139–158. https://doi.org/10.1146/annurev-anthro-092412-155506
Van Dijk, T. A. (2001). Critical discourse analysis. In D. Schiffrin, D. Tannen, & H. E. Hamilton (Eds.), The handbook of discourse analysis (pp. 352–371). Wiley.
Verhage, A. (2009). Corporations as a blind spot in research: Explanations for a criminological tunnel vision. In M. Cools (Ed.), Contemporary issues in the empirical study of crime (pp. 79–108). Maklu.
Volz, K., Yang, E., Dudley, R., Lynch, E., Dropps, M., & Dorneich, M. C. (2016). An evaluation of cognitive skill degradation in information automation. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 60(1), 191–195. https://doi.org/10.1177/1541931213601043
Walach, E., & Cannavo, M. J. (2019). Integrating AI into the radiology workflow: Do’s and don’ts. Aidoc YouTube Channel. Retrieved from https://www.youtube.com/watch?v=pStiOeNZHR4&feature=emb_title. Accessed 22 Mar 2020.
Weiss, G., & Wodak, R. (2003). Introduction: Theory, interdisciplinarity and critical discourse analysis. In G. Weiss & R. Wodak (Eds.), Critical discourse analysis: Theory and interdisciplinarity (pp. 1–32). Palgrave Macmillan.
Yu, K.-H., & Kohane, I. S. (2019). Framing the challenges of artificial intelligence in medicine. BMJ Quality & Safety, 28(3), 238–241. https://doi.org/10.1136/bmjqs-2018-008551
Zimmerman, A. D. (1995). Toward a more democratic ethic of technological governance. Science, Technology & Human Values, 20(1), 86–107.
Acknowledgements
The author would like to thank David S. Jones, Joost van Loon, Klaus Hoeyer, Zeev Rosenhek, Amy Fairchild, Dani Filc, and the two anonymous reviewers for their helpful comments on earlier drafts of this article. Special thanks to Lauren Duke for her valuable insights and assistance.
Funding
Not applicable.
Ethics declarations
Conflict of interest
No known conflicts.
Ethical approval
Not applicable.
Consent to participate
Not applicable.
Consent for publication
Not applicable.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix 1: key terms
The following table lists the key risk-related terms used in the analysis of the examined website sections. Next to some of the terms I added clarifying text in round parentheses.
Category | Terms
---|---
General | Cost (to developers or to stakeholders), ethics, harm, pace (of development), price (of development), public, risk, social
Patient safety | Accuracy, (AI) bias, black box, complex/ity, decision support, deskilling, distribution shift, efficiency, error, error rate, failure, fail safe, false negative, false positive, FDA, final decision, frame problem, limit/ed, out of sample, oversight, regulation, (AI) robustness, safe/ty, second reader, sensitivity, specificity, transparency
Healthcare workers’ position | Automation, burnout, compensation, empower (caregivers), expedite (processes), increasing demand, increasing load, jobs, job erosion, job loss, job satisfaction, (put someone) out of work, (make workers) obsolete, overload, polarization, quicken (processes), reduce demand (from workers), reduce time, (make workers) redundant, replace (workforce), salaries, scarcity (of workers), shortage, shrinking (workforce), speed (processes), take over, throughput (of the radiology unit), wages, workload, work/life balance, turnaround time
About this article
Cite this article
Duke, S.A. Deny, dismiss and downplay: developers’ attitudes towards risk and their role in risk creation in the field of healthcare-AI. Ethics Inf Technol 24, 1 (2022). https://doi.org/10.1007/s10676-022-09627-0