Responsibility, second opinions and peer-disagreement: ethical and epistemological challenges of using AI in clinical diagnostic contexts

Journal of Medical Ethics 48 (4):222-229 (2022)

Abstract

In this paper, we first classify different types of second opinions and evaluate the ethical and epistemological implications of providing them in a clinical context. Second, we discuss how artificial intelligence could replace the human cognitive labour of providing such second opinions, and find that several AI systems reach the levels of accuracy and efficiency needed to make their use an urgent ethical issue. Third, we outline the normative conditions under which AI may be used as a second opinion in clinical processes, weighing the benefits of its efficiency against concerns of responsibility attribution. Fourth, we propose a ‘rule of disagreement’ that fulfils these conditions while retaining some of the benefits of expanding the use of AI-based decision support systems in clinical contexts: AI is used as much as possible, but human second opinions are retained to resolve disagreements between the AI and the physician-in-charge. Fifth, we discuss some counterarguments.
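The ‘rule of disagreement’ sketched in the abstract amounts to a simple escalation procedure. A minimal illustration in Python follows; the function and argument names are hypothetical and not taken from the paper, which states the rule only in prose:

```python
def rule_of_disagreement(ai_diagnosis, physician_diagnosis, human_second_opinion):
    """Sketch of the rule of disagreement: the AI serves as the routine
    second opinion, and a human second opinion is invoked only when the
    AI and the physician-in-charge disagree."""
    if ai_diagnosis == physician_diagnosis:
        # Agreement: the AI corroborates the physician-in-charge,
        # so no further human cognitive labour is required.
        return physician_diagnosis
    # Disagreement: escalate to a human colleague to resolve it.
    return human_second_opinion()


# Agreement case: the shared diagnosis stands, no escalation.
print(rule_of_diagreement := rule_of_disagreement(
    "benign", "benign", lambda: "second reading"))  # → benign
```

This mirrors the abstract's trade-off: AI handles the bulk of second-opinion labour efficiently, while the human escalation path preserves a locus of responsibility when the AI and physician conflict.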

