
A moral analysis of intelligent decision-support systems in diagnostics through the lens of Luciano Floridi’s information ethics

Dmytro Mykhailov
From the journal Human Affairs

Abstract

Contemporary medical diagnostics has a dynamic moral landscape that includes a variety of agents, factors, and components. A significant part of this landscape is composed of information technologies that play a vital role in doctors’ decision-making. This paper focuses on the so-called Intelligent Decision-Support System (IDSS), which is widely implemented in contemporary medical diagnosis. The purpose of this article is twofold. First, I will show that the IDSS may be considered a moral agent in the practice of medicine today. To develop this idea, I will introduce the approach to artificial agency provided by Luciano Floridi and situate it in the context of contemporary discussions regarding the nature of artificial agency. It is argued here that the IDSS possesses a specific sort of agency, exhibits several agentive features (e.g. autonomy, interactivity, adaptability), and hence behaves autonomously in ways that may have a substantial moral impact on the patient’s wellbeing. It follows that, through the technology of artificial neural networks combined with ‘deep learning’ mechanisms, the IDSS achieves a specific sort of independence (autonomy) and may possess a certain type of moral agency. Second, I will provide a conceptual framework for the ethical evaluation of the moral impact that the IDSS may have on the doctor’s decision-making and, consequently, on the patient’s wellbeing. This framework is the object-oriented programming model of moral action (the OOP model) developed by Luciano Floridi. Although this model appears in many contemporary discussions in the field of information and computer ethics, it has not yet been applied to the medical domain. This paper addresses this gap and seeks to reveal the hidden potential of the OOP model for the field of medical diagnosis.

Introduction

Within the last few decades, our world has experienced the accelerating emergence of new AI systems. These systems differ significantly from other technological objects because of their ability to behave and act independently of the human designers who created them. Moreover, these technological objects now play a vital role in different layers of our collective practices and accomplish a diverse array of tasks of varying complexity. The field of medical diagnostics is one of these practices. Today’s medical technologies are complex, given the variety of functions they perform, such as pattern recognition, problem-solving, and decision-making. In the human domain, doctors must deal with equally complex functions, ranging from interpreting facts obtained from patient interviews and clinical examinations to comparing the results of laboratory exams, all the while using a combination of theoretical knowledge and skills acquired from experience (Miller, 2016, p. 185). Clearly, the field of medical diagnostics presents a complicated landscape.

It is common knowledge that the diagnosis of disease stands at the most crucial juncture in patient care. As such, the problem of misdiagnosis remains one of the biggest issues for national healthcare systems all over the world. For instance, according to data from the Chinese Medical Association, about 57 million people are misdiagnosed in clinical medicine in China each year, with a total misdiagnosis rate of 27.8% and an ectopic misdiagnosis rate of 60% (White Paper, 2018). Similar statistics may be found in other countries. [1]

Consequently, the biggest strategic transformation at the heart of contemporary electronic medicine is the shift from treatment medicine to prevention medicine (Venot, Burgun, & Quantin, 2014). This shift is driven by the implementation of new information technologies that shape healthcare practice at every level of its operation. However, the implementation of new technologies in the medical domain always leads to complex ethical dilemmas and moral concerns. Many of these issues are extremely novel and still await careful ethical examination. This study aims to produce an ethical assessment of the use of the AI-based intelligent decision-support system (IDSS) in diagnostics.

The purpose of this article is twofold. First, I aim to show that the IDSS may be considered a moral agent in medical diagnosis today. To develop this idea, I will introduce an approach to artificial agency developed by Luciano Floridi. Simultaneously, I will focus attention on the debate between the “Computational Modelers” and a group called “Computers-in-Society.” By doing so, I hope to demonstrate why the IDSS may be considered a moral agent rather than an exclusively moral entity (as Computers-in-Society suggests). I will also introduce Floridi’s definition of artificial agency, which consists of three criteria, namely, autonomy, adaptability, and interactivity, and show how the IDSS meets all three. Second, I will provide a conceptual framework for the ethical evaluation of the moral impact that the IDSS has on the doctor’s decision-making and, consequently, on the patient’s wellbeing. To do this, I will apply the object-oriented programming model of moral action (the OOP model) to the domain of medical diagnosis. This model was developed by the Italian ethicist Luciano Floridi in his The Ethics of Information (2013), yet it has never been applied to the field of medical technologies. A significant part of Floridi’s framework, and this is the point that binds together the two purposes of this paper, is that it may facilitate an ethical evaluation of artificial moral agency. Said differently, the model can, on the one hand, reveal particular cases where the IDSS exhibits autonomous or interactive behavior, while on the other, it can specify when this kind of system behavior may be ethically dangerous. This point is critical given that the contemporary moral domain still lacks adequate conceptual tools for tracing the changing moral nature of artificial agents operating in different social domains.

The term “IDSS” in this study will refer to a class of algorithms that address the recommendation problem using a content-based or collaborative filtering approach, or a combination thereof (Milano et al., 2020). With this focus on the algorithmic nature of the IDSS, I want to emphasize the “learning” ability of the computer system. The latter is vital to defining artificial moral agency.
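For readers less familiar with recommender algorithms, the following sketch illustrates, in a highly simplified form, the two filtering strategies just mentioned. It is purely illustrative: the toy data, function names, and scoring choices are invented for this example and do not describe any deployed IDSS.

```python
# Minimal, illustrative sketch of content-based vs collaborative filtering.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def collaborative_score(history, target_row, item):
    """Score `item` for the target case by weighting past cases' outcomes
    with their similarity to the target (case-based collaborative filtering)."""
    sims = np.array([cosine(row, target_row) for row in history])
    return float(sims @ history[:, item] / (np.abs(sims).sum() + 1e-9))

def content_score(item_features, target_profile):
    """Content-based score: similarity between an item's feature vector
    and a profile built from items already known to be relevant."""
    return cosine(item_features, target_profile)

# Toy data: rows = past cases, columns = candidate recommendations (0/1 relevance).
history = np.array([[1, 0, 1],
                    [1, 1, 0],
                    [0, 1, 1]], dtype=float)
target = np.array([1, 0, 0], dtype=float)            # partially known target case
print(collaborative_score(history, target, item=2))   # collaborative signal
print(content_score(np.array([0.2, 0.9]), np.array([0.1, 0.8])))  # content signal
```

A hybrid system would combine both signals; a real decision-support pipeline would add clinical knowledge bases on top, but even this toy version shows where the “learning” ability discussed below enters the picture.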

Even though the moral character of AI in medicine is a significant part of contemporary ethical debates (Lynn, 2019; Morley et al., 2019; Nadin, 2020; Pesapane et al., 2018; Venot et al., 2014), the IDSS tool still lacks a more profound philosophical and moral elaboration. Relatively little work has been done on understanding the basic ethical features of the system, its long-term moral consequences, or its transformative effects on national health systems (NHSs) in general. This article aims to fill this gap and open up a scientific discussion of the moral properties of the IDSS toolkit. To accomplish this task, I will, on the one hand, introduce the major conceptual components of Luciano Floridi’s Information Ethics (IE), while on the other, I will discuss the technical architecture of the IDSS in close relation to the medical diagnostics domain, followed by an ethical elaboration of the system’s behavior and environment.

Luciano Floridi has made a considerable contribution to the field of contemporary information and computer ethics (Floridi, 2010; 2011; 2013; Demir, 2012). The scope of his work includes a wide variety of novel philosophical ideas, ranging from the methodology of levels of abstraction (Floridi, 2008b) to the moral nature of AI (Floridi, 2012; Floridi et al., 2018). The main focus here will be on two articles, namely, “On the Morality of Artificial Agents,” co-authored with J. W. Sanders (Floridi & Sanders, 2004), and “A Defence of Informational Structural Realism” (Floridi, 2008a). Attention will also be given to a few important concepts in Luciano Floridi’s philosophy, namely, the object-oriented programming model of moral action (the OOP model) and the issue of artificial moral agency. Finally, it will be explained how these concepts may be beneficial to the ethical evaluation of the IDSS tool in diagnostics.

‘Computational Modelers’ vs ‘Computers-in-Society’ – debates on the nature of artificial moral agency

Contemporary information and computer ethics can be broadly divided into two schools of thought: those who accept the idea of artificial moral agency and those who do not. According to Johnson and K. W. Miller (2008), the first group claims that computational models represent reality, while the second group explores questions about the role of computer systems in decision-making by classifying computer systems as a form of technology, a new form with special features, but nevertheless a form of technology that is, by definition, created and deployed by humans. The first group, called the “Computational Modelers,” is represented by Luciano Floridi and J. W. Sanders, while the second (Computers-in-Society) is represented by Deborah Johnson (2006), Keith Miller (2016), and Thomas Powers (2013). The latter group insists that a computer agent can be a moral entity (insofar as the system’s behavior can have a moral impact) but cannot be a moral agent (since the system lacks mental states and “intendings” to act). According to Deborah Johnson:

The intentionality of computer systems means that they are closer to moral agents than is generally recognized. This does not make them moral agents because they do not have mental states and intendings to act, but it means that they are far from neutral. Another way of putting this is to say that computers are closer to being moral agents than are natural objects. Because computer systems are intentionally created and used forms of intentionality and efficacy, they are moral entities (Johnson, 2006, p. 202).

The act of defining computer systems as moral entities rather than moral agents created a strong dividing line between the two information and computer ethics groups. Computers-in-Society stands closer to the “standard” view of moral agency, which defines an agent as someone who is capable of having intentional mental states (desires, beliefs, and intentions) (Schlosser, 2019). Thus, Deborah Johnson clearly identifies another four “standard” specifications (besides intentional mental states) required for agency: embodied action resulting from intentional mental states, the action’s directedness toward the world, rational behavior, and the effect of this behavior on the ethical patient (Johnson, 2006, p. 198). Johnson goes on to argue that current computer systems meet all of the requirements mentioned above except the ability to have mental states. Consequently, as Johnson, together with other supporters of the “standard” approach, claims, the absence of mental states indisputably excludes computer systems from moral agency.

The Computational Modelers, by contrast, claim that artificial moral agency is possible. Even though computer programs lack mental states, the Computational Modelers insist that they can still be defined as moral agents. Thus, following Floridi:

An autonomous agent is an agent that has some kind of control over its states and actions, senses its environment, responds to changes that occur within it and interacts with it, over time, in pursuit of its own goals, without the direct intervention of other agents (Floridi, 2013, p. 188).

It is worth emphasizing here that the aforementioned definition of autonomy is close to the engineering description. Recall that in engineering vocabulary, autonomous agents are artificial entities that fulfill a certain, often quite narrow, purpose by moving autonomously through a “space” and acting in it without human supervision (Matthias, 2004). Nevertheless, it would be an obvious oversimplification to reduce Floridi’s view of artificial agency to the engineering approach. In what follows, I will show that Floridi provides a more nuanced description of the problem, since he takes into account a wider variety of criteria omitted by supporters of the engineering approach (for instance, a philosophical analysis of the moral implications that artificial agency may have for future generations). All this together makes Floridi’s approach both tenable and novel, while distinguishing his position from the abovementioned lines of thinking (e.g. Computers-in-Society and the engineering approach).

However, Floridi’s definition of autonomy must also be distinguished from the definitions found in the field of artificial life. Although the term “autonomy” is used in artificial life primarily to characterize self-organizing systems (Froese, Virgo, & Izquierdo, 2007), this approach may nevertheless be helpful for understanding the limits and scope of artificial agency in general. Broadly speaking, there are two classes of approaches to autonomy here: behavioral and constitutive. The behavioral approach holds that autonomy may be defined by the agent’s capacity for stable and/or flexible interaction with its environment, while the constitutive approach claims that autonomy in living systems is a feature of self-production (ibid.). As the autonomy of living systems exceeds the scope of this paper, only one brief but significant point will be noted. All of the aforementioned approaches to artificial agency, from Computers-in-Society to the approaches found in the field of artificial life, agree that current computer systems possess some level of independence (autonomy) from their designers. Their major disagreement concerns, on the one hand, the proper definition of this independence and, on the other, the proper specification of the other essential features of artificial agency. [2]

Let me now come back to Floridi’s definition of artificial agency and situate it in the aforementioned landscape. Needless to say, accomplishing specific tasks without the direct intervention of designers requires not only autonomy but also adaptability and interactivity. Interactivity means that the agent and its environment (can) act upon each other, while adaptability means that the agent’s interactions (can) transform the transition rules by which it changes state (Floridi, 2013) [3]. The Computational Modelers suggest that these three characteristics create a general framework for a proper definition of artificial moral agency. Let us now take a look at how the IDSS toolkit exhibits all three components of this kind of agency.

Given that the notion of agency includes not only the characteristics of the agent itself but also embraces elements of its environment, it is useful to first define the technical architecture of the IDSS in relation to its environmental components. According to Randolph A. Miller, the implementation of the IDSS includes not only the technical elements of the system,

but also the user and the healthcare environment in which the user practices. A model of all of the possible influences on the evaluation outcomes would include

  1. [I]DSS-related factors (knowledge-base inadequacies, inadequate synonyms within vocabularies, faulty algorithms, etc.),

  2. user-related factors (lack of training or experience with the system, failure to use or understand certain system functions, lack of medical knowledge or clinical expertise, etc.)

  3. external variables (lack of available gold standards, failure of patients or clinicians to follow-up during study period).

Additionally, in any [I]DSS evaluation, the user’s ability to generate meaningful input into the system, and the system’s ability to respond to variable quality of input from different users, is an important concern (Miller, 2016, p. 198).

The model provided by Miller may serve as a good point of departure for describing the IDSS environment. The user-related factors and external variables will be described in subsequent sections of this paper, so here I will address the IDSS-related factors.

A significant component of the IDSS architecture is found in its machine learning algorithms. This paper is premised on the idea that ‘deep learning’ algorithms underlie the system’s capacity for autonomous behavior and its adaptability. As such, the following section takes a closer look at this part of the IDSS technological architecture. It is widely agreed that the rapid explosion of neural networks and ‘deep learning’ technology has significantly influenced human understanding of artificial moral agency. The “game-changing” role of this technology lies in the system’s ability to change its behavior over time. Put in Computational-Modeler vocabulary, this ability may be interpreted as the ability to change state without a direct response to interaction (hence, to be autonomous) (Floridi & Sanders, 2004). In fact, as Andrew Spooner suggests:

[Another] appealing feature of neural networks—and what separates this technique from other methods of discovering relationships among data, like logistic regression—is the ability of the system to learn over time. A neural network changes its behavior based on previous patterns. In a domain where the relationship between findings and diseases might change, like infectious disease surveillance, this changing behavior can be desirable (Spooner, 2007, p. 37).

In light of the above, it becomes evident that a system’s learning ability creates a new ethical disposition where the IDSS may possess the character of a moral agent. The results of such a learning activity lead to a definite moral impact on the patient’s life.
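To make this learning ability concrete, the following minimal sketch shows a classifier whose predictions can change as new cases arrive, in contrast to a fixed rule. It is an illustration of incremental (‘online’) learning in general, not of any particular diagnostic system; the feature values and labels are invented for the example.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# A linear classifier that supports incremental updates via partial_fit.
model = SGDClassifier(random_state=0)

# Initial batch of (feature vector, label) pairs, e.g. encoded symptoms.
X0 = np.array([[1.0, 0.2], [0.1, 0.9], [0.8, 0.4]])
y0 = np.array([1, 0, 1])
model.partial_fit(X0, y0, classes=np.array([0, 1]))
print(model.predict([[0.9, 0.3]]))   # behaviour after the first batch

# Later cases shift the decision boundary: the same input may now be
# classified differently, which is the adaptability at stake in the moral analysis.
X1 = np.array([[0.9, 0.3], [0.85, 0.35]])
y1 = np.array([0, 0])
for _ in range(30):                  # repeated updates emphasize the drift
    model.partial_fit(X1, y1)
print(model.predict([[0.9, 0.3]]))   # possibly different from the earlier prediction
```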

Interactivity, understood as the ability to act upon an agent’s environment, is the final building block in Floridi’s definition of artificial moral agency. Following Floridi:

[A]ny action, whether morally loaded or not, as having the logical structure of a variably interactive process relating one or more sources or senders—depending on whether one is working within a multi-agent context—with one or more destinations or receivers (Floridi, 2013, p. 61).

The IDSS possesses this kind of feature in that it can have a strong moral impact on its environmental components. This part of the discussion may be labeled ‘human-machine interaction’ and will be examined in greater depth in the last section of this paper.

Moral action as a dynamic system – Luciano Floridi’s object-oriented programming model of moral action (OOP model)

One of the most significant ideas born in the field of contemporary IE comes from fresh insights into the epistemic component of every ethical action. To put it simply, without information there is no moral action (Floridi, 2006). In other words, to act morally, every agent needs information. The quality, amount, and accuracy of the information in question influence the agent’s decision-making and, hence, transform its moral behavior. The principal concern of this line of thinking can be summarized in the following formulation:

[We] must also acknowledge the fact that even a good will acts in the dim light of uncertainty and that, as human beings, we shall always lack full ethical competence. This is why our first duty is epistemic: whenever possible, we must try to understand before acting (Floridi, 2013, p. 70).

This suggestion is vital to what is known as “long-term technical implementation,” which has a moral impact not only on the individual but also on groups, institutions, countries, and nations. The other important aspect here is the idea of uncertainty, which represents a significant ethical component of every moral evaluation. In the case of AI systems in diagnostics, the main aim is to reduce the level of uncertainty and improve diagnostic accuracy. However, in many cases, instead of a precise prediction, the system may cause new “uncertainty” issues. This point will be revisited in the discussion of the moral properties of the IDSS in the last part of the article.

Contemporary IE proposes new perspectives for understanding the nature of moral action. [4] These perspectives are usually inspired by insights from the fields of computer science, programming, and mathematics. It may seem odd that a collaboration between supposedly distinct fields such as computer science and ethics could be relevant to solving moral problems. However, contemporary IE offers us fruitful examples of such a state of affairs. The object-oriented programming model of moral action (OOP), which is considered in this study, is one of them. Luciano Floridi first introduced the model in his monograph The Ethics of Information (Floridi, 2013). This paper places the OOP model at the core of its argument and claims that it provides contemporary ethics with a new moral framework, giving us insight into the structure and dynamics of moral action. Moreover, as Floridi maintains, “[t]he model is also useful to explain why any technology that radically modifies the ‘life of information’ is going to have profound implications for any moral agent” (2013, p. 20).

The OOP model, as may already be clear to readers familiar with object-oriented programming methodology, has its roots in computer programming languages. Floridi uses this background as the backbone of his approach and defines the basic structure of his model as follows:

The first task is to analyse a moral action as a dynamic system arising out of the combination of seven principal components: (1) the agent, (2) the patient, (3) the interactions between the agent and the patient, (4) the agent’s general frame of information, (5) the factual information concerning the situation insofar as it is at least partly available to the agent, (6) the general environment in which the agent and the patient are located, and (7) the specific situation in which the interaction occurs. (Floridi, 2013, p. 103)

It is clear that every moral action consists of many components. Each component may present a different level of significance depending on the concrete situation, its environment, and other additional elements. As a result, the whole dynamic of moral action can be understood as the interconnection between these seven parameters. That is why the OOP model is useful in the ethical analysis of specific practical issues.
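Because the model borrows its vocabulary from object-oriented programming, it can be helpful to render the seven components schematically as a data structure. The sketch below is only one possible illustrative reading of the quoted passage; the class and field names are my own and are not part of Floridi’s formalism.

```python
# A schematic rendering of the seven components of the OOP model.
from dataclasses import dataclass

@dataclass
class MoralAction:
    agent: str            # (1) who initiates the action (e.g. doctor, IDSS)
    patient: str          # (2) who is affected by it (the medical patient)
    messages: list[str]   # (3) agent-patient interactions, modelled as messages
    agent_frame: dict     # (4) the agent's general frame of information
    factual_info: dict    # (5) factual information about the situation
    environment: dict     # (6) the general environment (hospital, NHS, ...)
    situation: str        # (7) the specific situation (here: medical diagnosis)

diagnosis = MoralAction(
    agent="doctor + IDSS",
    patient="medical patient",
    messages=["lab results", "system recommendation", "final diagnosis"],
    agent_frame={"training": "...", "past cases": "..."},
    factual_info={"symptoms": "...", "imaging": "..."},
    environment={"regulation": "...", "hospital workflow": "..."},
    situation="diagnostic decision-making",
)
```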

The OOP model and an ethical cartography of the IDSS

The preceding section defined a general matrix of moral action as a dynamic structure in terms of the OOP model. However, the model doesn’t answer the question of the morality of the action itself. In other words, what makes the action morally acceptable or, conversely, morally wrong in terms of IE? To provide a reasonable answer to this question, it will be helpful to formulate a simple definition of a morally qualifiable action. An action can be qualified as moral if it is seen to cause moral good or moral evil. [5] In other words, the characterization of the moral action may be formulated as follows: this is the sort of action that may cause a change in P’s (patient’s) state. This simple definition will serve as a point of departure for further discussion of the moral theory of action in IE. Following Floridi, “only actions, and not entities in themselves, can be qualified as primarily evil” because, as has been made clear above, “evil exists not absolutely, per se, but in terms of damaging actions and damaged patients” (Floridi, 2013, p. 184). It can be agreed that when one talks about good or evil (moral or immoral actions), one should take into account the nature of the action (whether it is beneficial or damaging) and the condition of the patient. The latter can be easily defined in terms of entropy. If the action increases the patient’s level of entropy, that sort of action may be measured as an evil (morally wrong) action, and vice versa.

In this article, the notion of entropy is introduced within the medical domain. Here, the term ‘entropy’ refers to disease as a level (state) of dynamic damage and destruction of the human body (i.e. as a measure of chaos). [6] Disease is taken to mean entropy with a tendency toward further magnification. The dynamic nature of such an understanding of entropy (disease) should not be overlooked. This magnification may be fast (as in some oncological diseases) or slow (when the patient suffers a chronic disease without dramatic visible deterioration). Nevertheless, the central role of medical diagnosis is to define the disease, its nature and type, and to start a treatment that will decrease or remove the entropy from the system (the human body).
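The evaluative rule that follows from the two preceding paragraphs can be stated very compactly. The sketch below is a deliberately crude formalization of that rule, on the assumption that a patient’s ‘entropy’ could be represented by a single number; the scale is invented for illustration only.

```python
# A toy formalization of the entropy criterion: an action is assessed by the
# change it produces in the patient's "entropy" (level of bodily damage).
def assess_action(entropy_before: float, entropy_after: float) -> str:
    """Classify an action by its effect on the patient's entropy level."""
    delta = entropy_after - entropy_before
    if delta > 0:
        return "morally wrong (entropy increased)"
    if delta < 0:
        return "morally good (entropy decreased)"
    return "morally neutral (no change in entropy)"

print(assess_action(0.6, 0.3))  # e.g. successful treatment after a correct diagnosis
print(assess_action(0.6, 0.8))  # e.g. misdiagnosis allowing the disease to progress
```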

Moving on, I will define the most morally sensitive zones where the system’s behavior is crucial for a correct diagnosis and, hence, where the system may have the most significant moral impact. Adequately defining these “risk areas” is useful for developing, implementing, and using IDSS technologies in national healthcare systems. Furthermore, the ethical cartography presented by means of the OOP model may serve as a general roadmap for avoiding ethical issues in the future as well as for solving real-time moral dilemmas. With an understanding of this ethical framework in place, it becomes easier to examine how the IDSS has a moral impact on the ethical situation.

First & Second parameters: agent = doctor; patient = medical patient

In terms of the OOP model, the moral agent is the one who initiates a given action. As touched upon above, Floridi’s approach extends the ‘standard’ definition of agency (i.e. as someone possessing mental representations and intentions): an agent is defined by the combination of three essential features, namely autonomy, adaptability, and interactivity. To put this notion in causal terms, the agent represents the point of departure of a moral deed, the entrance point of the action into the world. The patient, in turn, is the one who is affected by the activity of the moral agent. In the specific case of medical diagnosis using the IDSS, we have two moral agents (the doctor and the IDSS) and one moral patient (the medical patient). It is worth mentioning from the outset that this bivalent model of the process is too simplistic. Usually, agents and patients are involved in interactive relations in which they can mutually and intensively affect one another.

Third parameter: moral action as the interaction between the doctor and the patient

The third point of the matrix is the interaction between the agent and the patient. This relationship is significant because it stands for the moral action itself. Given that this article is not concerned with morally neutral interactions, how can we describe the moral action inside the OOP model? Floridi gives the following definition:

Moral action itself can now be modelled as an information process, i.e. as a series of messages (M), initiated by A, that brings about a transformation of states directly (more on this qualification shortly) affecting P, which may variously respond to M with changes and/or other messages, depending on how M is interpreted by P’s methods (Floridi, 2013, p. 106).

As noted above, IE describes moral activity as an information process. Information plays a significant role at every stage of the moral situation. For instance, every moral action’s quality (and consequence) depends on how well informed the patient is, what quality of information (s)he received, and how thoroughly (s)he has processed it. The moral action (the message) is also an information structure directed to the patient who, in turn, processes this information through her general frame of information to verify the best line of subsequent action. Accordingly, this structure can be broadly divided into two main components: information and processing (how this information has been “read”). [7]

Fourth parameter: the doctor’s general frame of information

According to Floridi:

The fourth component is the personal or subjective frame of information within which the agent operates. […] It is constituted by internally dynamic and interactive records (modules) of [agent]’s moral values, prejudices, past patterns of behaviour, attitudes, likes and dislikes, phobias, emotional inclinations, moral beliefs acquired through education, past ethical evaluations, memories of moral experiences (Floridi, 2013, p.106).

Following this definition, one may suggest that the accuracy of current medical diagnosis may rely not only on the technical component of the diagnosis but also on the individual background of the doctor who is performing the specific medical task. As Goodman put it:

One way to abuse a tool is to use it for purposes for which it is not intended. Another is to use a tool without adequate training. A third way is to use a tool incorrectly (carelessly, sloppily, etc.) independently of other shortcomings (Goodman, 2007, p. 136).

All three of the above-mentioned forms of misuse may be included in the fourth parameter of the model and relate to the doctor’s general frame of information. Roughly speaking, this part of the OOP model could be helpful in the evaluation of the specific professional skills and competences that may have moral consequences for the diagnosis.

Fifth parameter: the doctor’s factual information concerning the situation

Perhaps the most significant question related to a moral evaluation of the IDSS is how the system’s behavior influences the doctor’s decision-making. This paper’s analysis is premised on the idea that the transformation of the doctor’s factual information concerning the situation (the information about the patient’s disease) is the most significant component of the ethical dynamics of the diagnostic process. As was touched upon earlier, in order to act morally, every agent needs a large amount of information: the quality, accuracy, and quantity of this information influence the agent’s moral behavior. In diagnostic decision-making, the moral stakes are extremely high, since how the doctor applies this set of information may have life-or-death consequences. A quick example may help make the point.

Assume, for the sake of argument, that the doctor is undertaking the final stages of the diagnostic process, in which a decision about a patient’s diagnosis and further treatment must be made. Steps such as the patient interview, the clinical examination, and the analysis of lab exams have already been completed. In this case, the doctor will use the IDSS for the final quantification of the solution. If the IDSS is taken to be a moral agent (as it is on Floridi’s account), it is of course reasonable to expect that the system, through its actions, will influence the doctor’s decision-making process. This happens because the system’s behavior (#1 in the OOP model) provides the doctor with information about the patient’s disease (#3) and, in this way, changes the factual information the doctor has concerning the situation at hand (#5) and, hence, the doctor’s decision-making. As a result, the doctor, as a moral agent, also comes to occupy the position of a moral patient.

Two important points must be clarified here. First, it has already been shown that the IDSS falls under Floridi’s definition of moral agency (i.e. it possesses autonomy, adaptability, and interactivity). In this particular case, the IDSS differs from all other medical technologies insofar as its behavior includes features of agency and represents a more sophisticated behavioral system than other medical devices used for gathering information. Second, the final medical decision is the result of collaboration between the doctor and the system. The collaborative nature of this decision-making is an integral part of current ethical debates and, for the purposes of this paper, calls for a more comprehensive elucidation.

According to Epstein (2015), collaborative decisions in diagnosis call into question the nature of decision-making in general and necessarily involve ‘collaborative intelligence.’ The latter represents a hybrid relation between human and machine in which responsibility for the final medical decision is shared between the doctor and the computer system. Although ‘collaborative intelligence’ provides NHSs with insights and long-term benefits, it may still lead to ethical pitfalls and moral hazards. One of the possible ‘risk zones’ can be found in the way humans and computer systems interact. Once computer systems are capable of providing the doctor with specific information about the patient’s health and of performing a sophisticated interpretation of the medical data, it becomes crucial to analyze the mode of ‘doctor-machine’ interaction. This last point has been clarified in contemporary postphenomenology under the name of ‘material hermeneutics’ (Ihde, 2017; Verbeek, 2005), a useful conceptual tool for evaluating the ‘human-technology’ relation. Considering this, I will include a short postphenomenological investigation in the next part of the paper. But first, another empirical example from the contemporary medical domain should be introduced.

I have already mentioned that artificial neural networks, together with ‘deep learning’ algorithms, form a significant part of informational medicine. A fitting example comes from so-called ‘convolutional neural networks,’ which underlie some of the most prominent image recognition technologies used in the diagnostics domain. The principal function of convolutional neural networks is to find patterns of similarity in medical images. In many cases, properly trained algorithms may find patterns of disease more efficiently than a doctor would. The artificial network ‘interprets’ the patient’s image and provides the doctor with a ‘readable’ result of its interpretational function. The contemporary American philosopher and founder of the postphenomenological movement, Don Ihde, refers to this process as ‘material hermeneutics.’ He writes:

Returning to the image illustrations, in each case the new image produces a perceivable, ‘readable’ result. The emission patterns, with intensities and shapes, are now translated by the instrument into bodily perceivable images, perceived and ‘read’ by the observer-scientist. What I am calling translation is a technological transformation of a phenomenon into a readable image. [my emphasis] (Ihde, 2017, p. 56).

This ‘technological translation’ is a significant part of the contemporary diagnostic domain. Current image recognition systems not only process the data; they also re-interpret and translate it into a readable image. Consequently, the final medical decision emerges at the crossroads of two ‘hermeneutic’ activities: the technological and the subjective.
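For readers unfamiliar with convolutional networks, the following bare-bones sketch shows the kind of structure at issue: convolutional layers extract visual patterns from an image and a final layer translates them into a ‘readable’ score. The layer sizes, the two-class output, and the random input are invented for illustration; this is not a clinically used model.

```python
import torch
import torch.nn as nn

class TinyDiagnosticCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # learn local image patterns
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1),  # compose them into larger motifs
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, num_classes)      # "readable" output for the doctor

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.classifier(h)

model = TinyDiagnosticCNN()
scan = torch.rand(1, 1, 64, 64)          # a fake grayscale "scan" for illustration
logits = model(scan)
print(logits.softmax(dim=1))             # class probabilities the clinician would read
```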

Sixth parameter: the action’s environment

As mentioned above, the structure of the computer system’s environment consists of the following components: IDSS-related factors, user-related factors, and external variables. For the purposes of this paper, the external variables correspond to a specific socio-technical context consisting of social actors (people, institutions, organizations, business companies, etc.) and other technological entities (Morley et al., 2019). The notion of environment takes us from the “individual” level of moral relations to a more general level (institutional or societal). This parameter might include such components as the general efficiency of the NHS, law and policy regulations, and the demographics of a specific country.

Broadly speaking, the external variables of the action’s environment may be specified from backward- and forward-looking perspectives. The backward-looking perspective includes components that correspond to the development and early implementation of the IDSS in the medical domain (so-called ‘technology-in-making’ components). In this context, the moral evaluation must focus on how the system was developed, tested, and integrated into the specific hospital environment.

A forward-looking perspective gives us a slightly different angle on the moral evaluation of the IDSS. This part of the evaluation process deals with the possible short-term and long-term moral consequences that may arise after the IDSS is implemented at the societal level (so-called ‘technology-in-use’ components). One potentially useful approach here (among many others) is to define the IDSS in terms of multi-stakeholder environments (Abdollahpouri et al., 2017). This approach may provide the ethicist with fruitful insights, considering that multiple parties (including users, providers, and system administrators) can derive different utilities from the decision-support process (Milano et al., 2020).

Seventh parameter: the specific situation in which the interaction occurs = medical diagnosis

The medical diagnostic process is complex. Given the complexity of contemporary diagnostics, it is important to note what Marcum (2008) has described as the “uncertainty issue.” Marcum writes that “The causal relationship is not generally a simple linear relationship between cause and effect. That relationship is often complex and multifaceted. Sufficiency and even necessity in terms of disease causation are generally only partial.” In other words, “we never have a full causal network or tree, but only a partial one” (Marcum, 2008, p. 36). Accordingly, the doctor will always be working in a situation of uncertainty, one which can never be totally reduced. Irreducible uncertainty is an unavoidable ontological presupposition rooted in the structure of the human body, and it seems to be irremovable in general.

Furthermore, the problem of diagnostic uncertainty has an interesting correlation with the uncertainty of the computer system. The latter represents a type of system behavior that is tricky to explain and even harder to predict. Before the rapid explosion of artificial neural networks, the programmer was usually a coder who wrote the program from the very beginning to the very end. All the changes the programmer made were visible to him or her, because the code was written line by line in a linguistic (symbolic) representation. At this stage, the program could be inspected and was transparent to its creator (Matthias, 2004). In artificial neural networks, however, the linguistic representation is replaced by a matrix of synaptic weights, which lacks symbolic clarity and cannot be interpreted in a precise fashion. As such, contemporary neural networks are far less interpretable than older programs, which brings us to the so-called “algorithmic black-box” issue.
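The contrast can be made tangible with a toy example. Below, the first function is the older, line-by-line kind of program: every decision step can be read off the code. The second encodes its behavior entirely in numerical weight matrices, which is the structural feature behind the black-box problem. The rules, features, and weights are all invented for illustration.

```python
import numpy as np

def rule_based_triage(fever: bool, cough: bool) -> str:
    # Every decision step is explicit and inspectable by its author.
    if fever and cough:
        return "suspect infection"
    if fever:
        return "monitor"
    return "no action"

# The "learned" counterpart: behaviour lives in weights, not in readable rules.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(2, 4)), rng.normal(size=(4, 3))   # opaque synaptic weights

def network_triage(features: np.ndarray) -> int:
    hidden = np.tanh(features @ W1)
    return int(np.argmax(hidden @ W2))     # index of a class; nothing to "read" inside

print(rule_based_triage(True, True))
print(network_triage(np.array([1.0, 1.0])))
```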

The ‘uncertainty issue’ in contemporary diagnosis includes, however, one more dimension, namely the ‘body complexity’ issue. This issue embraces so-called medical “abnormalities”: cases where the standard symptomatic matrix does not work and the roots of the disease appear to be extremely vague. Interestingly enough, in these cases the IDSS may be more useful than elsewhere. Among the most significant benefits of the system is its ability to uncover hidden variables inside the medical data, a crucial feature in the diagnosis of rare diseases and bodily abnormalities. This feature is also important because it significantly expands the doctor’s cognitive capabilities.

Consequently, the ‘uncertainty issue’ consists of three levels. The first level concerns the diagnostic process itself. The second level relates to the “algorithmic black-box” issue and to all the problems associated with the technical complexity of contemporary computer systems. The third level refers to ‘body complexity’ problems, those which arise from the sophistication of the human body’s organic processes. When these three levels combine with each other, the risk of misdiagnosis increases.

Conclusion

This article has aimed to provide a general ethical framework for the analysis of the IDSS tool in different techno-social environments. The preceding analysis has demonstrated that technologically mediated diagnostics represents a dynamic moral situation composed of diverse participants and components. Special attention was paid to the question of the artificial agency of the IDSS. As noted above, the landscape of current debates on the nature of artificial agency is changeable; it includes different approaches, schools of thought, and conceptual frameworks. This paper was premised on the idea that the definition of artificial agency brought into play by Luciano Floridi may be valuable in ethical evaluations of the IDSS within the medical domain. This approach, together with the OOP model, offers a strong conceptual framework for the moral analysis of complex computer systems. It should ultimately be noted, however, that this approach does leave some knots untied.

Of special importance in this article was the move to define the components of moral interactions that may have potentially dangerous outcomes, and to show how the IDSS might influence these outcomes. The most ethically dangerous interaction in the OOP model is represented by component #5, namely the doctor’s factual information concerning the situation. This raises the issue of the epistemological component of moral action: to act morally, an ethical agent needs an accurate informational framework for accurate decision-making. In the case of medical diagnosis, this element becomes important once the IDSS begins to influence the general structure of the doctor’s informational frame of reference used in the diagnostic procedure. Of course, other elements of the OOP model are no less significant. In fact, every element of the system, depending on the specific socio-technical context, may have a vital ethical impact on the patient’s wellbeing.

The preceding analysis makes clear that the doctor’s role in technologically mediated diagnosis is not fixed. It shifts from that of the agent of the moral action to that of the patient. The ‘agent’ component of the doctor’s activity exists because the doctor engages in diagnostic decision-making. However, during the diagnostic process (when the diagnosis is not yet concluded), the doctor is greatly influenced by the IDSS. In such a situation, he or she appears to be a patient (not in a medical sense, but in the vocabulary of IE) of the system’s behavior. Consequently, the final decision is a result of this ‘doctor-computer’ interaction. The nature of this final decision is first and foremost collaborative.

This insight is an important finding of this study. Insofar as the IDSS exerts a specific sort of autonomy and can influence the doctor’s decision-making, it becomes apparent that in such a multi-agent system (the IDSS, the doctor, and the patient) the moral dynamic moves in a different direction and with a different intensity than in a two-agent system (doctor-patient). This finding offers a vital point of departure for a new round of discussion on the issues of moral agency, artificial agents, distributed morality, artificial evil, and ethical entropy, to name but a few.



Acknowledgements

The work on this paper has been supported financially by the Major Project of the National Social Science Fund of China, “Studies on One Hundred Years of Western Metaethics” (grant number: 19ZDA036).

References

Abdollahpouri, H., Burke, R., & Mobasher, B. (2017). Recommender systems as multistakeholder environments. https://doi.org/10.1145/3079628.3079657

Capurro, R. (2008). On Floridi’s metaphysical foundation of information ecology. Ethics and Information Technology, 10, 167–173. https://doi.org/10.1007/s10676-008-9162-x

Demir, H. (Ed.) (2012). Luciano Floridi’s philosophy of technology. Dordrecht: Springer Netherlands. https://doi.org/10.1007/978-94-007-4292-5

Epstein, S. (2015). Wanted: Collaborative intelligence. Artificial Intelligence, 221, 36–45. https://doi.org/10.1016/j.artint.2014.12.006

Ess, C. (2008). Luciano Floridi’s philosophy of information and information ethics: Critical reflections and the state of art. Ethics and Information Technology, 10, 89–96. https://doi.org/10.1007/s10676-008-9172-8

Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14, 349–379. https://doi.org/10.1023/B:MIND.0000035461.63578.9d

Floridi, L. (2006). Information ethics, its nature and scope. ACM SIGCAS Computers and Society, 36(3), 21–36. https://doi.org/10.1145/1195716.1195719

Floridi, L. (2008a). A defence of informational structural realism. Synthese, 161, 219–253. https://doi.org/10.1007/s11229-007-9163-z

Floridi, L. (2008b). The method of levels of abstraction. Minds and Machines, 18, 303–329. https://doi.org/10.1007/s11023-008-9113-7

Floridi, L. (2010a). Ethics after the information revolution. In L. Floridi (Ed.), The Cambridge handbook of information and computer ethics (pp. 3–20). Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511845239.002

Floridi, L. (2010b). Information: A very short introduction. New York: Oxford University Press. https://doi.org/10.1093/actrade/9780199551378.001.0001

Floridi, L. (2011). The philosophy of information. Oxford: Oxford University Press. https://doi.org/10.1002/9781444396836.ch10

Floridi, L. (2012). Big data and their epistemological challenge. Philosophy & Technology, 25, 435–437. https://doi.org/10.1007/s13347-012-0093-4

Floridi, L. (2013). The ethics of information. Oxford: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199641321.001.0001

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., & Chazerand, P. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28, 689–707. https://doi.org/10.1007/s11023-018-9482-5

Froese, T., Virgo, N., & Izquierdo, E. (2007). Autonomy: A review and a reappraisal. In F. Almeida e Costa, L. M. Rocha, E. Costa, I. Harvey, & A. Coutinho (Eds.), Advances in artificial life. ECAL 2007. Lecture Notes in Computer Science, vol. 4648. Berlin, Heidelberg: Springer.

Goodman, K. W. (2007). Ethical and legal issues in decision support. In E. S. Berner (Ed.), Clinical decision support systems, Health Informatics (pp. 131–147). Bern: Springer International Publishing Switzerland. https://doi.org/10.1007/978-3-319-31913-1_8

Ihde, D. (2017). Postphenomenology and technoscience: The Peking University lectures. New York: State University of New York Press.

Johnson, D. (2006). Computer systems: Moral entities, but not moral agents. Ethics and Information Technology, 8(4), 195–204. https://doi.org/10.1007/s10676-006-9111-5

Johnson, D., & Miller, K. W. (2008). Un-making artificial moral agents. Ethics and Information Technology, 10(2–3), 123–133. https://doi.org/10.1007/s10676-008-9174-6

Lynn, L. A. (2019). Artificial intelligence systems for complex decision-making in acute care medicine: A review. Patient Safety in Surgery, 13(6), 1–8. https://doi.org/10.1186/s13037-019-0188-2

Marcum, J. A. (2008). Humanizing modern medicine: An introductory philosophy of medicine. Dordrecht: Springer. https://doi.org/10.1007/978-1-4020-6797-6

Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6, 175–183. https://doi.org/10.1007/s10676-004-3422-1

Milano, S., Taddeo, M., & Floridi, L. (2020). Recommender systems and their ethical challenges. AI & Society, 35, 957–967. https://doi.org/10.1007/s00146-020-00950-y

Miller, K. W., Wolf, M., & Grodzinsky, F. (2016). This “ethical trap” is for roboticists, not robots: On the issue of artificial agent ethical decision-making. Science and Engineering Ethics, 23(2), 389–401. https://doi.org/10.1007/s11948-016-9785-y

Miller, R. A. (2016). Diagnostic decision support systems. In E. S. Berner (Ed.), Clinical decision support systems, Health Informatics (pp. 181–209). Bern: Springer International Publishing Switzerland. https://doi.org/10.1007/978-3-319-31913-1_11

Morley, J., Machado, C. C. V., Burr, C., Cowls, J., Joshi, I., Taddeo, M., & Floridi, L. (2019). The debate on the ethics of AI in health care: A reconstruction and critical review. SSRN, 1–35. https://doi.org/10.2139/ssrn.3486518

Nadin, M. (2020). Aiming AI at a moving target: Health (or disease). AI & Society, 35, 1–9. https://doi.org/10.1007/s00146-020-00943-x

Pesapane, F., Codari, M., & Sardanelli, F. (2018). Artificial intelligence in medical imaging: Threat or opportunity? Radiologists again at the forefront of innovation in medicine. European Radiology Experimental, 2(35), 1–10. https://doi.org/10.1186/s41747-018-0061-6

Popa, E. (2020). Artificial life and ‘nature’s purposes’: The question of behavioral autonomy. Human Affairs, 30(4), 587–596. https://doi.org/10.1515/humaff-2020-0052

Powers, T. (2013). On the moral agency of computers. Topoi, 32(2), 227–236. https://doi.org/10.1007/s11245-012-9149-4

Schlosser, M. (2019). Agency. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Winter 2019 Edition). https://plato.stanford.edu/archives/win2019/entries/agency/

Singh, H., Meyer, A., & Thomas, E. (2014). The frequency of diagnostic errors in outpatient care: Estimations from three large observational studies involving US adult populations. BMJ Quality & Safety, 23(9), 727–731. https://doi.org/10.1136/bmjqs-2013-002627

Spooner, A. S. (2007). Mathematical foundations of decision support systems. In E. S. Berner (Ed.), Clinical decision support systems: Theory and practice (pp. 19–45). New York: Springer-Verlag. https://doi.org/10.1007/978-3-319-31913-1_2

Venot, A., Burgun, A., & Quantin, C. (2014). Medical informatics, e-health: Fundamentals and applications. Paris: Springer-Verlag France. https://doi.org/10.1007/978-2-8178-0478-1

Verbeek, P.-P. (2005). What things do: Philosophical reflections on technology, agency, and design. University Park: Pennsylvania State University Press. https://doi.org/10.1515/9780271033228

White Paper of AI healthcare technology and application in 2018 (医疗人工智能技术与应用白皮书). (2018). Internet Healthcare Industry Alliance.

Published Online: 2021-04-22
Published in Print: 2021-04-27

© 2021 Institute for Research in Social Communication, Slovak Academy of Sciences
