Introduction

In recent decades, increasing attention has been paid to the topic of responsibility in technology development and engineering.Footnote 1 The topic is often raised in the context of disasters due to technological failure, such as the Bhopal disaster (Castleman and Purkayastha 1985; Bisarya and Puri 2005), the explosion of the Challenger (Vaughan 1996; Davis 1998; Harris et al. 2005), and the sinking of the Herald of Free Enterprise (Richardson and Curwen 1995; Berry 2006). The discussion of responsibility then typically focuses on questions related to liability and blameworthiness.Footnote 2 Asking these questions might suggest that there is one, unambiguous definition of responsibility. This is far from true, however. In moral philosophy, few concepts are more slippery than that of responsibility (Miller 2001, p. 455). What the questions of liability and blameworthiness share is that the question of responsibility is asked after some undesirable event has occurred. However, the ascription of responsibility can also refer to something that ought to happen in the future: being responsible then means that an agent has been assigned a certain task or set of obligations to see to it that a certain state of affairs is brought about (or prevented). In that latter case, responsibility is often ascribed from a consequentialist perspective.Footnote 3 As a third approach, one could address the question of responsibility from the perspective of the rights of potential victims, which often focuses on the question of who should put a situation right (e.g., by compensating for certain damage).

Recent discussions in engineering ethics call for a reconsideration of the traditional quest for responsibility. Rather than centering on alleged wrongdoing and blame, the discussion should shift to more socially responsible engineering, in which “to maximize the service to the larger society” should become the ethical norm (Durbin 2008, p. 230). Responsibility as blameworthiness should therefore be replaced by, or complemented with, the notion of engineering as a responsible practice (Pritchard 2001). Until the late 1990s, however, the scholarly literature on engineering ethics seemed biased towards the blame-oriented or merit-based perspective on responsibility rather than this more forward-looking perspective (Pritchard 2001, p. 391; Durbin 1997).

Similarly, both in the general field of moral philosophy, and more specifically in the field of engineering ethics, there has been a call to shift the focus of ethics from an abstract outsider’s perspective towards the practice in which moral deliberation takes place. For example, in the general field of moral philosophy Alasdair MacIntyre and Michael Walzer argue for an insider’s perspective when trying to improve a practice.Footnote 4 In the field of engineering ethics, philosophers such as Michael Pritchard, Mike Martin, Vivian Weil and Michael Davis are firm proponents of taking an insider’s perspective on engineering and its ethical issues. Michael Davis, for instance, argues that the discussion of responsibility is too much about ‘holding others responsible’ instead of ‘assuming responsibility’ (Davis 2009).

This shift from an outsider’s perspective towards an insider’s perspective might have implications for the topic of responsibility as well. The present paper explores three main approaches to responsibility in order to see which one is most appropriate to apply in engineering and technology development, where I take appropriateness to mean two things:

  1. the approach should reflect people’s basic intuitions of when it is justified to ascribe responsibility to someone. An approach that contravenes these basic intuitions will probably be deemed unfair. Whether such an approach should start from abstract principles and work top-down to considered judgments about particular cases, or start from these considered judgments and work bottom-up to more general principles, is still open for discussion. It is important, though, that people recognize that the responsibility ascription is justified.

  2. the approach should inform the direction of technology development and thereby improve technological design. In order for this to be so, it should be possible to apply the approach to specific contextualized moral issues raised by particular technological and scientific developments rather than to more general abstract issues. This second requirement follows from recent discussions within engineering ethics, and ethics concerning New and Emerging Science and Technology (NEST) in particular, in which it is argued that the ethical and social aspects of new technologies should be addressed at an early stage of technology development in order to adapt technology to society’s needs (Van de Poel 2008; Swierstra and Rip 2007).Footnote 5

The outline of this paper is as follows. I will first discuss three different perspectives for ascribing responsibility: a merit-based perspective, a rights-based perspective and a consequentialist perspective. After a brief intermezzo on forward-looking and backward-looking responsibilities, I will apply the three perspectives to the example of the development of a new sewage water treatment technology. A comparison of the three approaches will show that the consequentialist perspective is especially suited for distributing responsibilities, since it is most akin to the nature of engineering work and therefore offers the best opportunities for improving technological design. The paper ends with recommendations for further developing the field of engineering ethics by incorporating insights from political philosophy.

Three Perspectives for Ascribing Responsibility

In this section, I discuss three approaches or perspectives for ascribing responsibility: a merit-based perspective, a rights-based perspective and a consequentialist perspective.Footnote 6 Although the latter is common in non-philosophical discussions (for example, in the organizational and management literature), the philosophical literature mainly focuses on responsibility as blameworthiness (i.e., the merit-based perspective).Footnote 7 Since it is the most common approach in the philosophical literature, I start the present overview with the merit-based perspective.

A Merit-Based Perspective on Responsibility

In the philosophical literature on moral responsibility, the aim of ascribing responsibility is mostly retributivist. In the traditional view, being morally responsible means that the person is an appropriate candidate for reactive attitudes, such as blame or praise (Strawson 1974; Fischer and Ravizza 1993; Miller 2004). Being morally responsible (i.e., being eligible for reactions of praise and blame) is not the same as being causally responsible. One can imagine situations where a person did indeed causally contribute to a certain outcome but is not eligible for moral evaluation, and hence not for reactive attitudes of praise or blame (e.g., in the case of positive outcomes due to sheer luck, or negative outcomes which one could not reasonably avoid). In both cases it is not warranted to praise or blame the person for the outcome. Hence, since moral responsibility in the view elaborated above is related to reactive attitudes, which may have consequences for the well-being of an agent, the ascription of moral responsibility is only warranted if these reactive attitudes and their consequences are merited or deserved (see Zimmerman 1988; Wallace 1994; Watson 1996; Magill 2000; Eshleman 2008). This is usually translated into certain conditions that have to be met before it is fair to ascribe responsibility to someone. In the remainder of this paper, I call this the fairness criterion of responsibility ascriptions. Although academics disagree on the precise formulation, the following conditions together capture the general notion of when it is fair to hold an agent morally responsible for (the consequences of) their actions (see Feinberg 1970; Hart and Honoré 1985; Bovens 1998; Fischer and Ravizza 1998; Corlett 2006):

  1. Moral agency: the responsible actor is an intentional agent concerning the action. This means that the agent must have adequate possession of his or her mental faculties at the moment of engaging in the action. Young children and people whose mental faculties are permanently or temporarily disturbed will not be held (fully) responsible for their behavior. However, to put oneself knowingly and voluntarily into a situation of limited mental capacity (by drinking alcohol or taking drugs, for example) does not in general exempt one from responsibility for the consequences of one’s behavior. Some people phrase this condition in terms of intention, meaning that the action was guided by certain desires or beliefs.

  2. Voluntariness or freedom: the action resulting in the outcome was voluntary, which means that the actor is not responsible for actions done under compulsion or external pressure, or hindered by other circumstances outside the actor’s control. The person must be in a position to determine his or her own course of action (cf. condition 1), and to act accordingly.

  3. Knowledge of the consequences: the actor knew, or could have known, the outcome. Ignorance due to negligence, however, does not exempt one from responsibility.

  4. Causality: the action of the actor contributed causally to the outcome; in other words, there has to be a causal connection between the agent’s action or inaction and the damage done.

  5. Transgression of a norm: the causally contributory action was faulty, which means that the actor in some way contravened a norm.

Note that the first two conditions, especially, are closely interrelated. Being an intentional agent means that one has the opportunity to put one’s will into effect and that one is free from external pressure or compulsion (Thompson 1980; Lewis 1991; May and Hoffman 1991). With regard to the fifth condition, there has been extensive debate as to what counts as a norm. In daily life the norm can be much vaguer than in criminal law, where the norm must be explicitly formulated beforehand (the nullum crimen, nulla poena sine praevia lege poenali principle).Footnote 8

A Rights-Based Perspective on Responsibility: The No Harm Principle

A second approach for ascribing responsibilities within the field of science and technology is based on the individual right of people to be safeguarded from the consequences of another person’s actions (the so-called no harm principle). This implies that “actions are right if and only if: either there are no (possible) consequences for others; or those who will experience the (possible) consequences have consented to the actions after having been fully informed of the possible consequences” (Zandvoort 2005b, p. 46). The aim of this approach is remedial: it refers to the duty or obligation to put a situation right (Miller 2004). In practice this rights-based approach translates into two requirements for decision making regarding the development, production and use of technology (Zandvoort 2008). The first is the (legal) requirement of strict liability, which holds that actors are unconditionally required to repair or fully compensate for any damage to others that may result from their actions, regardless of culpability or fault (Honoré 1999; Van Velsen 2000; Vedder 2001; Zandvoort 2005a). Hence, the question of responsibility is reduced to the question ‘who caused the particular outcome’ (causal responsibility). As such, blame is not the guiding concept in ascribing responsibility.Footnote 9 The second requirement relates to the principle of informed consent, which holds that “for all activities that create risks for others, all who are subjected to the risks must have given their informed consent to the activities and the conditions under which the activities are performed” (Zandvoort 2008, p. 4).

Instead of fairness towards potential wrongdoers, this approach focuses on fairness towards potential victims. Given the importance of informed consent, the engineering ethics literature on this approach to responsibility concentrates on the conditions under which consent can be obtained and its implications for, e.g., risk communication and risk assessment.

A Consequentialist Perspective on Responsibility

The third perspective for ascribing responsibility is the consequentialist perspective. Here, responsibility is ascribed for instrumental reasons rather than for retributivist (merit-based) or remedial (rights-based) reasons. On this perspective, the most important question when ascribing responsibility is not whether the reactive response triggered by the responsibility ascription is warranted but whether it is likely to lead to a desired outcome, such as improved behavior by the agent (Eshleman 2008).Footnote 10 Where fairness is the main criterion for the merit-based perspective and informed consent the basis for the rights-based approach, efficacy is the criterion for consequentialist responsibility ascriptions, which means that they should contribute to the solution of the problem at hand (Nihlén Fahlquist 2006a, 2009). According to a strict consequentialist view, the responsibility ascription that yields the best consequences is the morally optimal one. Responsibilities, in this view, do not take specific actions of persons as their object but rather have the character of obligations to see to it that a certain state of affairs is brought about (or prevented). As such, responsibilities are outcome- and result-oriented (Van den Hoven 1998, p. 107).

In the case of engineering and technology development, this consequentialist perspective could be taken to imply that for a technology to be “right,” in the sense that it is from a societal point of view desirable or at least acceptable that the technology is being developed, potential implications for society (e.g., for human health and the environment) should be taken into account during the design phase. In other words, for every potential implication, whether this is a risk or some other problematic issue, someone should be ascribed the responsibility to address it. This does not mean that all risks should be completely excluded—a requirement which is impossible to live up to—but that at least everything that can reasonably be known should be considered during the design and development phase. Sometimes this might imply that, after deliberation, a potential risk is accepted as it is, because the (societal) costs of preventing it outweigh the (societal) costs of accepting it.Footnote 11

The Three Perspectives Compared

In the overview presented above, a distinction was made between the goals aimed at in the different perspectives. In addition to having different aims, the three approaches each depart from a particular moral background theory and each try to answer a different moral question.Footnote 12 The merit-based approach fits into a deontological framework, which is primarily a theory of “right actions.” The rights-based approach fits into an ethics of rights and freedoms (see, e.g., Nozick 1974; Mackie 1978). This theory shares with deontological ethics that it takes “action” as the primary object of evaluation. Where deontological ethics starts from duties, a rights-based discourse starts from people’s individual rights and freedoms and uses these to determine which actions are permissible and which are not. In both cases the content of the responsibility ascription is action that ought to be abstained from (merit-based) or that ought to be done (rights-based): to breach a duty is to perform a blameworthy action (merit-based) or to be liable for compensation (rights-based).

The consequentialist approach, which (unsurprisingly) fits best into some form of consequentialism, has a different focus. Rather than on particular actions, the consequentialist approach focuses on states of affairs. It does not prescribe what action ought to be done but rather what should be achieved.

A summary of the three approaches is listed in Table 1.

Table 1 Perspectives for ascribing responsibility

Forward-Looking Versus Backward-Looking Responsibility

Before applying the three perspectives to a real engineering case, some clarifications regarding responsibility need to be made.

One could argue that the merit-based and the consequentialist perspectives on responsibility are not comparable, in the sense that they refer to different time horizons. We could therefore not speak of two perspectives on the same concept but should rather speak of two different types of responsibility, each with a different criterion. For example, the merit-based perspective is often applied after the fact and is therefore backward-looking or retrospective. The consequentialist perspective is often applied in a forward-looking or prospective sense (i.e., before something has happened). However, despite this difference in focus, the two perspectives are closely related. Imagine an engineer E who designs some artifact A. Unfortunately, there is a serious flaw in the design and the artifact causes the death of some innocent person P. Imagine further that E could easily have designed an artifact A* with similar (functional) characteristics but without the property leading to the death of P. In fact, E knew that the design was flawed and intentionally did not improve it, even though he had the freedom to do so. From this we would probably conclude that E is morally responsible for the death of P. But why is that so? As explained in the section “A Merit-Based Perspective on Responsibility,” this perspective involves a moral assessment of the agent in terms of the conditions discussed above. Except for the condition of causality, which determines whether someone causally contributed to a certain outcome, the other four conditions bridge the gap between causal and moral responsibility. In the example, four conditions are obviously met: the engineer is a moral agent (condition 1), he was free (2), he knew of the consequences (3) and he causally contributed to the death (4). But what about the fifth condition: the transgression of a norm? Most people would probably say that E is blameworthy because he did not pay enough attention to the lethal consequences of the artifact. Apparently, the fifth condition in the merit-based perspective implies a forward-looking responsibility to be careful or to pay attention. Both in law and in professional ethics this forward-looking responsibility is operationalized in the duty of (reasonable) care to avoid (foreseeable) harm to others. At a minimum, this duty of care implies that E should consider how to avert foreseeable harm to people who are affected by his artifact, but it could also be argued that he has the (broader) responsibility to look after potentially dangerous but as yet unforeseen risks. The duty of care implies that there are certain acts or omissions that should be avoided. In this simplified case, the duty of care requires that the engineer not develop artifact A but rather A*. So also in a merit-based perspective, people have forward-looking responsibilities.

If we start from the consequentialist view, we also see that forward-looking and backward-looking responsibilities are closely related. Backward-looking responsibilities are ascribed precisely because blame and praise can have a motivational force for taking up one’s forward-looking responsibility. Hence, forward-looking responsibilities translate into backward-looking responsibilities and vice versa.

Development of a New Sewage Treatment Technology: The Three Perspectives Applied

Now that we have clarified the different approaches to ascribing responsibility, we can apply them to the field of technology development. I do so on the basis of embedded ethical research that was carried out parallel to the technical development of a new sewage treatment technology (Zwart et al. 2006; Van de Poel and Zwart forthcoming). The idea behind this so-called embedded ethical research or ethical parallel research is that ethical investigations are carried out parallel to, and in close cooperation with, a specific technological R&D project. The ethicists interact with the technological researchers, allowing the ethicists to co-shape new technological developments. By applying the three responsibility perspectives (merit-based, rights-based, and consequentialist) to this technology development project, I explore the appropriateness of the different perspectives in engineering practice in terms of the two criteria formulated in the introduction of this paper.

Ethical Parallel Research Into the Upscaling of the GSBR Technology

The ethical parallel research concerned the development of a new sewage treatment technology, the so-called granular sludge sequencing batch reactor (GSBR) (see the Text box below for a description of the technology). Different parties contributed to the technological project; the ethical parallel researchers classified them according to their role in the project team: the roles of researcher, technology producer (including activities like design and consultancy), user of the technology, and financer of the technology. The ethical parallel research consisted of qualitative research based on interviews, document analysis, attendance at technical meetings and the organization of an interactive session in the Group Decision Room (GDR; an electronic brainstorming facility) with the different stakeholders, where questions related to risks and responsibilities were addressed.

Text box Development of a granular sludge sequencing batch reactor (GSBR) (Zwart et al. 2006; Van de Poel and Zwart forthcoming)

One of the crucial elements in the development of the technology was the upscaling of the three-liter laboratory reactor to an outdoor pilot plant of 1.5 m³. This upscaling was partly based on several unproven assumptions about which microbiological mechanisms are at work. The ethical parallel research therefore focused on the question of how this incompleteness of knowledge was dealt with in the choice of scaling-up steps. Incomplete knowledge can lead to the introduction of certain risks, which may become manifest in the research done during the development of the technology, but also later in the eventual use of the technology. The aim of the ethical parallel research was to find out how risks and uncertainties are handled and how this handling could be improved.

During the ethical parallel research, it was observed that the risks due to so-called secondary emissions (i.e., unwanted but not yet regulated substances in the effluent) were not addressed by any of the engineers and researchers involved. The users of the technology delegated the risk of secondary emissions to the research phase, for which they were not primarily responsible, and most of the researchers allocated the risk to a phase for which they in turn bore no responsibility. Nobody, therefore, assumed responsibility for dealing with this risk. The argument put forward by the researchers and users was that the impact of the risks due to these secondary emissions was negligible and that any problems were expected to be solvable in the next phase of the research. This expectation was based on the presumed similarity between the biological processes in traditional sewage plants and those in the GSBR technology. As a result, the issue of who is responsible for checking or preventing secondary emissions never became an object of discussion. The ethical parallel researchers state that it cannot be concluded that “such emissions are a serious cause of concern; the situation is rather one of insufficient knowledge. Thus the question arises which of the actors in the network are responsible for reducing this knowledge deficiency, and which actors are responsible for reducing potential secondary emissions in case they turn out to be a serious concern” (Van de Poel and Zwart forthcoming). As a result of the ethical parallel research, the consultancy firm together with the university applied for additional funding to carry out research into the secondary emissions.

In the remainder of this section, I try to show how the different responsibility approaches can be applied to the development of this new technology and how these affect engineering practice, focusing on the issue of secondary emissions.

A Merit-Based Perspective on Harm Caused by the GSBR Technology

The first approach I discuss is the merit-based perspective on responsibility. In the section “Forward-Looking Versus Backward-Looking Responsibility,” it was shown that, although focused on blame, the merit-based perspective implies the ascription of forward-looking responsibilities as well. It was argued that these forward-looking responsibilities are primarily derived from the duty of (reasonable) care. This means that people should take measures against foreseeable harm and possibly also look after as yet unforeseen harms. It is notoriously difficult to assess what “reasonable care” exactly amounts to in technology development, especially in the case of new and emerging technologies, where the consequences are even harder to predict. A possible starting point for the evaluation of due care is the test of independent peers: if peers think that some negative consequences were foreseeable, we could probably conclude that the engineer(s) did not exercise due care.

Let us assume that the GSBR technology is further developed and commercially exploited. Now suppose that secondary emissions, contrary to expectations, cause problems for farmers who have their surface water treated with the GSBR technology. Can we point to some person or institution as being morally responsible for these problems? The ethical parallel researchers asked the developers of the technology to whom they would ascribe moral responsibility for the secondary emissions (in the sense of preventing or investigating the harmful effects). They did not get a unanimous answer: some ascribed the responsibility to the researchers at laboratory scale, some to the operators of the pilot plant and some to the users of the technology. Some even argued that no one carries moral responsibility for these harmful consequences because “introduction of new technology introduces risks and we have to learn to live with that” (ibid.). The latter answer suggests that the principle of due care was not breached at all. However, the fact that some researchers from adjacent scientific fields did express concerns about the technology (ibid., pp. 20–21) suggests the opposite. Apparently, before the involvement of the ethical parallel researchers there was not enough incentive to take up the forward-looking responsibility to further investigate the potential risks of these secondary emissions, even though the researchers were aware of the lack of knowledge regarding these emissions. As such, we could say that the duty of (reasonable) care was not fully exercised.

If we assess moral responsibility in terms of the traditional criteria, probably no one can be held morally responsible. Although the different actors all contributed to the development of the technology, we cannot single out one particular actor or institution that individually made all the necessary contributions to the outcome. Whereas the conditions, if applied to the complete research group, were fulfilled, probably none of the actors or institutions within the research group fulfilled all the responsibility criteria individually. Especially the knowledge condition, requiring that one can only be held responsible if one knew or could have known the negative consequences, is problematic in this case. Since none of the actors took up the responsibility to reduce the knowledge deficiency regarding the secondary emissions, which might have shown that the secondary emissions were not as harmless as the technology developers thought, no further preventive measures were taken to reduce the risks. However, it is not clear who should have taken up this responsibility. The responsibility for this knowledge deficiency probably lies with the researchers, whereas the causal responsibility lies with the technology producers and users. Hence, if we apply the five conditions of the merit-based approach, nobody can be held responsible for the negative consequences (i.e., the secondary emissions) of the technology, even though the research team as a whole breached the duty of (reasonable) care.Footnote 13 In the literature this is called the problem of many hands, which was first defined as such by Thompson (1980).Footnote 14 It refers to the difficulty of identifying, even in principle, the person responsible for some outcome when a large number of people are involved in an activity. Sometimes it is precisely the joint action of individuals within a collective that brings about negative consequences, because collectives can create potentially greater harms than individuals working independently. Acting on an individual basis, neither the water board nor the researchers could have built a treatment plant with the innovative technology, but as a collective they were able to do so.

Some people therefore propose to hold the collective as a whole morally responsible, with all individuals within the collective held equally responsible (May and Hoffman 1991). This ascription of responsibility to the whole collective has been criticized as morally unsatisfactory: people are then held responsible for the conduct of others, which is deemed unfair (Lewis 1991). This raises a fundamental problem for individual responsibility: either no one can fairly be held responsible, and hence the problem of many hands occurs, or moral responsibility is ascribed to the whole collective of people who in some way contributed to the outcome, leaving aside an individual assessment in terms of the responsibility conditions, which is deemed unfair. The latter holds especially if sanctions are coupled to the ascription of responsibility. After all, being part of a collective that caused some negative event does not imply that one’s individual actions were immoral or illegitimate and hence that one is eligible for blame.Footnote 15 We could also see this as a tension between what we owe to potential wrongdoers (not being blamed unless it is fair to do so) and what we owe to potential victims (making someone responsible for preventing disasters). Although I think that individual responsibility should not too easily be dismissed on the grounds that individuals are powerless cogs in the machinery of their professional organization, the point remains that this traditional individualistic approach seems to put much more emphasis on what we owe to potential wrongdoers than on what we owe to potential victims.Footnote 16 Consequently, the problem of many hands is a serious threat to this approach.

This conceptual problem of individual responsibility raises an important practical problem as well. Due to the inability to ascribe moral responsibility, an important opportunity for improvement is missed. Ascribing moral responsibility may lead to learning processes, which may ultimately prevent similar disasters from happening again. If no one can be held responsible, this opportunity for learning will not be fully exploited (Nihlén Fahlquist 2006a).

Summarizing, in the merit-based perspective it is difficult to ascribe responsibilities at all. In the light of engineering practice, this approach seems rather powerless. In the extreme case, no one learns from the mistakes that are made and the development of the technology continues as if nothing happened. As a consequence, there is little incentive to take up the forward-looking responsibility to prevent negative consequences.

A Rights-Based Perspective on Harm Caused by the GSBR Technology

As noted in the section “Three Perspectives for Ascribing Responsibility,” the rights-based perspective focuses on the task or obligation to put a situation right. With regard to the question of liability, all people involved in the project (including the end users) unanimously agreed that water boards using the new technology would be legally liable should incidents (such as problems related to the secondary emissions) occur.

If we apply the principle of strict liability, it is questionable whether institutions, such as the water board in the present example, will ever participate in innovative research projects. They will most probably be very reluctant to participate in the development of innovative and radically new technologies. Some scholars even argue that unrestricted liability would hamper any large-scale investment, including desirable ones (Perrott 1982). After all, existing problems sometimes require radical technological innovations (think of technological innovations relating to green energy). Technologies are primarily developed to “change positively the quality of life” (Berloznik and Van Langenhove 1998, p. 24), in the sense that they try to solve or reduce existing problems. In the development of new technologies, trade-offs have to be made between competing values, in the GSBR case between sustainability and safety. The categorical rejection of a technology because it does not satisfy one of the demands is not a viable option, since this creates risks of its own (Sunstein 2005).

As explained in the section “A Rights-Based Perspective on Responsibility: The No Harm Principle,” the procedure of “informed consent” has been introduced as a possible response to this problem: in case of a risk of irreversible harm, the principle of strict liability requires that the consent of all people who are subjected to this risk be obtained. If this consent cannot be obtained, the risk should simply not be posed (Zandvoort 2008, p. 8). The fact that this approach takes seriously the perspective of potential victims of (high-risk) technologies is unmistakably a strength. The risks of these technologies cannot be imposed on anyone without his or her informed consent. Hence, an unfair distribution of risks through majority decision making is not allowed according to this approach. However, despite its democratic aim, this approach runs the risk of paralyzing the debate on potentially risky technologies. After all, the principle of actual consent implies that anyone has the right to veto activities that impose risks, which ultimately creates a society of stalemates where nothing can be done, as Hansson argues (Hansson 2006, 2009). Informed consent is thus problematic if applied to collectively affected individuals. Zandvoort therefore discusses procedures to increase the willingness to consent (Zandvoort 2008). These are all based either on monetary compensation (direct or indirect, such as the building of a new city theatre if the city consents to the building of a nuclear plant in the neighborhood) or on improving the credibility of risk assessment. It is striking that neither approach gives any incentive to improve the technology itself. The focus is on ready-made technologies rather than on participation in the decision-making process during development (Hansson 2006, p. 150).

Summarizing, the rights-based approach emphasizes the right of people to be safeguarded from harm caused by others. However, the operationalization of this right by way of the principle of informed consent is problematic in the context of collective decision making. Moreover, the approach in itself seems problematic because of its focus on monetary compensation instead of improvement of the technology.

A Consequentialist Perspective on Potential Harm Caused by the GSBR Technology

The third approach is the consequentialist perspective, which is in fact the approach taken by the ethical parallel researchers. In the section “Ethical Parallel Research Into the Upscaling of the GSBR Technology,” I discussed how the ethical parallel research influenced the development of the GSBR technology. The ethical parallel research led to the identification of gaps in the distribution of responsibilities, in particular the responsibility for secondary emissions. As a result, funds were acquired to carry out additional research into the secondary emissions. The analysis of the responsibilities by the ethicists thus led to an improvement of the division of labor amongst the technological researchers and engineers, which in turn led to an improved technological design. The responsibilities were not distributed on the basis of fairness criteria but on the basis of efficacy (capacity, power, resources). Once the technological research team had been made aware of the responsibility issues, some of the technological researchers took the initiative to incorporate the secondary emissions in the research project. As such, the effect of the ethicists’ involvement on engineering practice was not that of blaming or sanctioning but rather that of co-shaping: the ethical parallel research did not so much pose limits to the technology development as guide it.

Summarizing, since responsibilities are ascribed according to the criterion of efficacy, the problem of many hands does not manifest itself (or at least, not as severely as would be the case in a strictly merit-based perspective). By taking a consequentialist stance, the ethicists encouraged the engineers and researchers to improve the technological design.Footnote 17

The Three Perspectives Compared

If we compare the different approaches, all three have their merits. The merit-based perspective emphasizes the fairness of a responsibility ascription. It takes seriously the moral question: who, from a moral point of view, is responsible? This moral notion of responsibility is in line with common morality, and especially in cases involving victims of irreversible harm, people will be interested to hear the answer.Footnote 18 We sometimes “want to ascribe responsibility to the person who is responsible—for example, someone who intentionally and culpably brought about an unwanted event—irrespective of the impact on future events of our responsibility ascriptions” (Nihlén Fahlquist 2006a, p. 17). The merit-based perspective does make a serious attempt to answer this question of “who is responsible?” However, this classical view of responsibility is based on an individualistic assessment of responsibility, as we saw, which makes it problematic in the context of collective action. Kutz argues that as long as individuals are only assessed in terms of the actions they produce, the disparity between collective harm and individual effect results in the disappearance of individual responsibility (Kutz 2000). And with the disappearance of responsibility, he argues, so goes the incentive for individuals to improve their behavior.

The question of “who is responsible?” was found to be less problematic in the rights-based approach, since it uses only the causal condition rather than the full range of responsibility conditions. With its focus on compensation and consent, this approach puts most emphasis on the interests of potential victims. However, it was also shown that this approach gives little or no incentive to actually improve technological design. Moreover, it seems to have a hampering effect on the exploitation of innovative new technologies.

The consequentialist approach, as a third approach, appeared to be most powerful in terms of the second point identified at the start of this paper: the ability to shape the direction of technology development. It should be noted, first, that engineers themselves are often driven by a consequentialist heuristic of “problem solving” (Davis 2009). Rather than discussing who is to blame, they are guided by questions of how to prevent the (re-)occurrence of harmful events. This attitude of “problem solving” is necessarily context-specific. When engineers design a new technology, they want that technology to work under real-world circumstances and not only in a laboratory. They therefore engage in extensive studies of errors and mistakes. As Davis puts it,

Whatever is true of other professionals, engineers consider it their responsibility to study any disaster that seems to arise from what they did – and to report what they find. To commit a certain mistake once, even a serious one, is something engineers tolerate as part of advancing technology (…). What engineers do not tolerate is that an engineer, any engineer, should make the same mistake. Once a mistake has been identified, the state of the art advances and what was once tolerable becomes intolerable (a kind of incompetence). (…) Engineering is unusual among professions in recognizing an obligation to ‘acknowledge their errors’. (Davis 2009)

We could say that the consequentialist perspective is most typical of engineering practice itself. The background question is always “does it solve the problem at hand?” Because it focuses on real issues rather than abstract duties or principles, its impact on engineering practice is also more sensitive to the context in which technology development takes place.Footnote 19 If a certain responsibility ascription does not lead to the desired solution to a real problem, this responsibility should not be imposed, or should be imposed differently. Compare this with the rights-based perspective, which focuses solely on the question of whether or not ready-made technologies are harmful. The rights-based perspective seems to influence not so much the direction as the pace of technology development.

Second, the consequentialist approach allows for more fine-grained responsibility ascriptions. Since the merit-based perspective is often applied after the fact (i.e., after something undesirable has happened), the question of responsibility becomes a matter of all-or-nothing: one is either responsible for the undesirable outcome or not (Goodin 1985; Bovens 1998; Lynch and Kline 2000). Some therefore argue that this merit-based perspective is about nonresponsibility: it defines excusing conditions that exempt people from responsibility (Ladd 1989). However, recent insights from Science and Technology Studies (STS) show that before dramatic cases occur, many small incremental decisions have usually been made that ultimately lead to the undesired outcomes. Instead of focusing on blame for these—sometimes catastrophic—events, engineering ethics should pay more attention to the “complexities of engineering practice that shape decisions on a daily basis,” STS scholars argue (Lynch and Kline 2000), in order to modulate technology in the desired direction (Bovens 1998; Swierstra and Jelsma 2006; Van de Poel and Van Gorp 2006). The consequentialist responsibility ascription is based on the capacity of each agent to contribute to the shaping of technology. After all, within the consequentialist perspective, with its criterion of efficacy, responsibilities ought to be ascribed according to the capacity of each agent to discharge them. This is in line with the common intuition that having the capacity, power, and resources to contribute to the solution of a social problem entails a forward-looking responsibility to do so (Nihlén Fahlquist 2009). For example, in the case of risky technologies, engineers, more than any other stakeholder, have knowledge of the risks and of possible ways to reduce them. From the consequentialist perspective this entails the responsibility to address these risks. This responsibility ascription, then, is not derived from a merit-based view in which particular actions are deemed faulty, but rather from the set of obligations to see to it that a certain state of affairs is brought about (i.e., a situation in which risks are prevented or at least addressed properly). This approach to ascribing responsibility fits nicely with insights from the more sociologically oriented literature on the dynamics of engineering and technology development.

However, efficacious as it may be, the fairness of the responsibility ascription cannot be ignored altogether. This brings us to the other requirement of appropriateness: the question of whether the responsibility perspective reflects people’s intuitions of when it is justified to ascribe a certain responsibility. It is unlikely that a purely consequentialist approach is psychologically feasible: the motivational force of responsibility ascriptions that are inconsistent with basic intuitions of fairness will be undermined (Kutz 2000, p. 129). This is in line with the point made earlier about the relation between forward-looking and backward-looking responsibility. The motivational force to take up one’s forward-looking responsibility is partly derived from expressions of praise and blame. A researcher in the GSBR project who judges his or her own responsibility within the project to be fair will be motivated to act according to it, whereas a researcher who is assigned a responsibility unfairly may be inclined not to act according to it, or to do so less carefully.Footnote 20 Moreover, from a moral point of view it is also undesirable to ascribe responsibilities in ways that contravene our basic feelings of fairness. Even if fairness is not the overriding criterion, we do not want a responsibility ascription that is morally unfair—both for the victims and for the people who are potentially blamed. Hence, even though fairness is not the ultimate criterion in the consequentialist perspective, it should still somehow be taken into account. Especially when different people are involved, there can be a tension between the requirement of efficacy and that of fairness. Whereas the fairness requirement is somewhat restrictive in ascribing responsibility, the efficacy requirement seems to have the opposite effect: it broadens rather than narrows the scope of responsibility ascriptions. If we focus only on the fairness criterion, we will probably end up with an ascription of responsibilities that is undesirable from a consequentialist perspective. If we only stress the efficacy of the responsibility ascription, we will probably end up with an unfair distribution of responsibilities. Hence, we somehow have to incorporate both perspectives when we ascribe responsibilities.

A possible way to reduce the tension between the requirements of fairness and efficacy is to focus on alternative fairness criteria (i.e., criteria that are not related to the traditional substantive fairness criteria for individual responsibility). Insights from political philosophy show that fairness can also be achieved in a more procedural way. According to a procedural approach to fairness, a responsibility distribution can be considered fair if it is established in a fair way (i.e., if it is the result of a fair procedure). Further research is needed to explore this procedural approach to fairness. A possible starting point may be the Rawlsian approach of Wide Reflective Equilibrium (WRE), according to which a procedure can be justified as fair if it fits within the individual set of background theories and moral principles of each relevant actor involved. The establishment of this procedural fairness could be part of embedded ethical research (Doorn forthcoming b; Van de Poel and Zwart forthcoming). Questions as to which actors are relevant to include and how to assess such a WRE need to be explored further (Doorn forthcoming a).

The discussion above indicates an important role for the ethicist in the process of distributing responsibilities and identifying potential (negative) side-effects and consequences. The obvious question is then how this approach would work when the technical work is not paralleled by an ethicist. I think we have to distinguish between two situations. The first is one where a group of researchers currently has no embedded ethicist in its project but does have some experience with ethical parallel research from previous projects. In this case the researchers have experienced how ethical research can be carried out; the challenge is to sustain this “ethical attitude” in future projects. This is a challenge that should already be considered during the ethical parallel research itself. The future will tell to what extent past ethical parallel research will indeed lead to more permanent ethical reflection by the engineers themselves during their work. It goes without saying that the ethicists hope that their involvement is not just a passing phase and that it has an enduring impact on engineering practice. Further research into the different methods for doing ethical parallel research and possible ways to sustain its impact is therefore required.

The most common situation, however, is one where the research team has never been paralleled by a team of ethicists. How can we make sure that ethical reflection is also incorporated in the work of these teams? Let me start by saying that there is a positive trend in the requirements set by funding organizations: it is nowadays often required to include a paragraph on ethical, legal, and social aspects (ELSA) in funding proposals. Although this attention to ELSA still runs the risk of being nothing more than “checkbox ethics,” it points in the direction of greater awareness of the social implications of technology. In addition to this requirement from funding organizations, (prospective) engineers should be trained to recognize moral issues in their professional work. Engineering ethics should therefore be part of every engineering curriculum. Whether this will make the role of the ethicist completely redundant is doubtful, but it will probably make engineers more inclined to invite ethicists into their projects when they need advice.

Conclusions

In this paper I discussed three “responsibility perspectives” in the light of the development of a new technology. It was found that the merit-based perspective was rather powerless in engineering practice because of the problem of many hands. As a result, opportunities for learning and improvement were not optimally used. The rights-based perspective appeared to be the most pessimistic about technology development. Due to its focus on monetary compensation, the effect of this approach on technology development was rather restrictive: funding organizations and commercial partners would probably become reluctant to sponsor innovative research. Moreover, it did not provide a strong incentive to improve the technology itself. The effect of the consequentialist perspective on engineering practice was the most profound. This approach allowed for more fine-grained responsibility ascriptions and was found to fit nicely with insights from the STS literature.

Although the consequentialist approach was found to be the most powerful in co-shaping the direction of technology development, it was argued that the fairness requirement cannot be ignored altogether. It was shown that, for both moral and consequentialist reasons, responsibility ascriptions should reflect our basic intuitions of when a particular responsibility ascription is justified. Since there is a potential tension between the traditional fairness criteria and the criterion of efficacy, it was proposed to conceive of fairness in a procedural rather than a substantive way, in order to reconcile the two demands of responsibility ascriptions.