Nanoethics. 2011; 5(2): 129–141.
Published online 2011 Jul 9. https://doi.org/10.1007/s11569-011-0119-x
PMCID: PMC3166601
PMID: 21957435

Assessing Expectations: Towards a Toolbox for an Ethics of Emerging Technologies

Abstract

In recent years, several authors have argued that the desirability of novel technologies should be assessed early, when they are still emerging. Such an ethical assessment of emerging technologies is by definition focused on an elusive object. Usually promises, expectations, and visions of the technology are taken as a starting point. As Nordmann and Rip have pointed out in a recent article, however, ethicists should not take for granted the plausibility of such expectations and visions. In this paper, we explore how the quality of expectations on emerging technologies might be assessed when engaging in a reflection on the desirability of emerging technologies. We propose that an assessment of expectations’ plausibility should focus on statements on technological feasibility, societal usability, and desirability of the expected technology. Whereas the feasibility statements and, to a lesser extent, the usability statements are frequently quite futuristic, the claims on desirability, by contrast, often display a conservative stance towards the future. Assessing the quality of expectations and visions on behalf of emerging technologies requires, then, a careful and well-directed use of both skepticism and imagination. We conclude with a brief overview of the tools and methods ethicists could use to assess claims made on behalf of emerging technologies and improve the ethical reflection on them.

Keywords: Emerging technologies, Ethics, Expectations, Plausibility, Technology assessment, Techno-moral change

During the last decades, Western societies have become aware that it is important to assess the desirability of new and emerging technologies early in their development. The rationale for this is that it is preferable to anticipate developments when they are still malleable, even if at this stage the future is still unknown and uncertain [5]. Especially in the developed West, governments have therefore started allocating budgets to study the ethical, legal and social implications of emerging fields like genomics and nanotechnology. Several forms of technology assessment (TA) have been developed as early stage appraisals of emerging technologies, and academic scholars have contributed conceptual and methodological studies (see, for example, Constructive TA [24], Real-Time TA [12], Midstream Modulation [8]).

Such forms of ‘upstream’ TA by definition have an elusive object: a technology that is only emerging and thus, as yet, exists mainly in the form of visions, promises and expectations. These may pertain to what the technology will look like in the laboratory, how it will perform on the market, how people will use it and profit from it, and how human life will be improved as a result. Unfortunately, experience attests that these expectations1 at best provide a shaky basis for deliberation and decision-making. All too often expectations are colored by the strategic aim to mobilize support and funding [4, 30], they are difficult to check by non-experts, and, last but not least, they usually prove to be false or incomplete [7]. As soon as a technology leaves the lab and enters society, it invariably turns out to do more, or less, than expected. We have learned to deal with unforeseen and unintended consequences. Nonetheless, no one aiming to reflect on the desirability of an emerging technology can avoid dealing with such visions and expectations. This raises the question we want to explore in this article: in the context of an early ethical reflection on the desirability of new and emerging technologies, how should one assess expectations and promises on emerging technologies?

To address this question, we take our cue from two papers by Alfred Nordmann [19] and Nordmann and Rip [20]. They argue that nanoethics tends to go along uncritically with technological expectations and promises, when pointing out the ethical concerns that these supposed developments raise. On this premise, they conclude that an ethical reflection on emerging technologies should start with ‘reality checks’ ([20], p. 274): ethicists should always start with discussing ‘the quality of promises’ on nanotechnology.

We agree with both authors that the good quality of technological expectations should not be taken for granted in the ethics of nanotechnology or, more generally, of new and emerging technologies (NEST). However, an assessment of the quality of such expectations requires an ‘epistemological analysis of uncertain futures’ ([11], p. 99), which is more complex than their proposal of ‘reality checks’ suggests. In this article, we will elaborate two critical points. First, since expectations on NEST are strategic, an appraisal of their quality may benefit from an analysis of their rhetoric. This redirects the assessment of expectations to the domain of plausible discourses for a specific audience and highlights some structural features of expectation statements. In particular, expectations on NEST contain not only statements on technical feasibility, but also statements on social usability and (moral) desirability. The ethicist should assess these three types of claims, each of which requires different strategies.

This exposition leads to our second critical point: Nordmann and Rip’s plea for ‘reality checks’ is grounded in the premise that many expectations about emerging technologies are too speculative. And indeed, this is probably true for the claims about technical feasibility and—to a lesser extent—for the claims about social usability. But as far as the claims about (moral) desirability are concerned, the problem is not a lack of realism, but a lack of moral imagination. Expectations on behalf of emerging technologies rarely take into account that morality, like technology and society, may change. Therefore, the task of the ethicist is also to spur the imagination of potential changes in morality bound up with the expected technology.

Our contribution proposes some tools to improve the epistemic conditions for a normative reflection on the desirability of new and emerging technologies. In doing this, we address, on the one hand, the (often acknowledged) need ‘for developing criteria and procedures’ to avoid ‘mere speculations’ (Grunwald [11] p. 95)2 and, on the other hand, the (often neglected) need for encouraging moral imagination in actors deliberating on emerging technologies.

A ‘Plea for Less Speculation’ in Ethics

Nordmann and Rip warn that in the case of an emerging technology like nanotechnology ‘ethics leaps ahead of science’. They refer to the tendency of ethicists to begin their analysis by hypothesizing futuristic, visionary technologies and then to conclude by presenting the huge existential ethical issues raised by these new technologies. In doing so, however, the hypothetical stance is usually downplayed and speculative scenarios are presented as imminent and pressing. This is what Nordmann refers to as the ‘if and then’ fallacy [19]. As an illustration, Nordmann discusses ethical debates related to brain–computer interfaces. These arguments begin by postulating that such interfaces might become widespread in society, only to turn immediately to discussing ethical concerns regarding the enhancement of human nature. In the process, this type of ethics fails to address the feasibility of the technology in question. Furthermore, it discards the hypothetical status of the premise: “might” becomes “will”. While feeding unjustified hopes and fears, these discussions neglect more pressing ethical issues.

Nordmann argues that ethicists too often adopt technology developers’ expectations without a critical appraisal of their content. As a result, the quality of the resulting ethical assessment is questionable and more pressing ‘here and now’ issues fail to draw the attention they deserve. By drawing the public gaze towards unrealistic scenarios, ethicists unwittingly contribute to turning strategic promises about technologies (which may be hyped) into generally shared expectations. In this manner, the ethicist, rather than acting as a critical force, ends up supporting specific interest groups. In order to avoid this, ethicists should be more critical with regard to promises and expectations about emergent technologies and avoid building their analyses on unwarranted and strategic claims.

Nordmann and Rip present two strategies to bridge the gap between futuristic ethical speculation and actual technology development. First, ethicists should focus on specific (nano)technologies rather than on general technological fields: different devices and technologies raise different ethical challenges. Second, ethicists should check the information they receive about the future of nanotechnology. Nordmann and Rip claim that such ‘reality checks’ should precede any ethical evaluation of (nano)technology. While we agree with their first point, that ethicists should avoid reflecting on an abstract “Technology” and should instead prefer a situated analysis of specific technologies, we find the appeal to “reality checks” more problematic.

According to Nordmann and Rip, the ethicist delivers “reality checks” by tracing the sources of expectations and the accountability of people who make claims on behalf of nanotechnology. In this way, ethicists assess expectations’ quality in order to establish whether they refer to feasible technical developments worthy of ethical reflection. Therefore, such a reality check would close the gap between ethical scenarios and current science. As Nordmann explains, ‘nanotechnological and other techno-scientific prospects suffer from the failure to distinguish physical possibility (all that does not contradict outright the laws of nature) and technical possibility (all that humans can build)’ ([19], p. 4). By assessing the quality of expectations the ethicist critically appraises the conditions of technical possibility rather than taking them for granted.

Although Nordmann and Rip rightly point at the need for assessing the quality of techno-scientific expectations, their proposal for “reality checks” assumes that assessing techno-scientific expectations is mainly a question of assessing technical possibility. In the following, we will argue that the strategic and rhetorical role of expectations should also be taken into account. In doing so, dimensions different from technical possibility come into view.

The Rhetorical Character of Expectations

Techno-scientific expectations are ‘future orientated abstractions’ [3] that describe an individual or collective belief in the possibility that a certain state of affairs will come into being. They are often expressed in the semantics of intentions, goals, hopes, or proposals. Expectation statements can be found at different levels and stages of the technological innovation process, for example in the agenda setting of local laboratories, in the resource allocations of a techno-scientific field, or in spokespersons’ press communications [30].3 The type of expectations most likely to raise ethical concerns are those ‘broad and diffuse promises (that) are able to justify work on the technology “as such”’ (p. 182), circulating ‘in the media, university press offices and the scientific community’ ([20], p. 274) with the strategic role of attracting consensus and resources [4]. Such expectations outline scenarios of technological and societal trends and are usually considered as “taken for granted background” ([30], p. 184) rather than as issues for discussion. Both their apparently uncontroversial status and their circulation in broader, lay, non-specialist contexts make these statements a springboard for the ethical analyses criticized by Nordmann and Rip.

The idea of a “reality check” suggests that ethicists had better go beyond the rhetoric of such expectations and assess whether they are reliable given the current state of the art. However, this focus on ‘reality’ neglects that an analysis of expectations’ rhetorical character might provide a fruitful starting point for their assessment rather than a limitation. First of all, a perspective focused on rhetoric fits the prospective character of expectation statements much better than a realist perspective. Secondly, an analysis of the rhetorical structure of expectation statements can point out specific features that may serve as a starting point for assessing such expectations.

To start with the first point: since techno-scientific expectations are prospective, future oriented statements, whether or not they are ‘true’ can only be assessed in hindsight—if ever. This is not helpful for the ethicist who wants to reflect on emerging technologies. This is where the rhetorical tradition may prove helpful. In this tradition, a statement is “plausible” or “likely to be true” 4 in a certain context if a specific audience considers it as apparently valid and credible. Even without being able to assess the truth of a claim, people in an audience will consider a claim plausible if they can attribute meaning to it and believe that it is convincing [22]. Assessing the plausibility of expectations means, then, that one should explore how expectations are constructed and how and why different audiences, situated at a particular point in time and space and within a specific background knowledge on the topic, perceive them as (im-)plausible.

The second advantage of a rhetorical approach is that it can lay bare some of the structural features of expectation statements. Let’s take two examples of such broad expectation statements, both in the field of molecular diagnostics and widely published in the media:

In critical care settings, such as Emergency Departments within hospitals, there is a persistent clinical need for diagnostic solutions that enable fast and accurate patient triage—for example, diagnosing acute coronary syndromes (e.g. a heart attack) to enable faster treatment and improve patient outcomes. The fully automated handheld testing devices that will be developed jointly by Philips and bioMérieux will be immunoassay-based and employ Philips’ new Magnotech biosensor platform. They are intended to assist clinicians in time-critical decision-making by reducing delays involved in laboratory-based testing.5

Every year more than 10,000 couples in the Netherlands apply for help because of involuntary childlessness. A sperm analysis is typically the first step of fertility research. Testing sperm quality requires stringent pre-test preparations and a specialized laboratory. Tests often have to be repeated two to five times for sufficient reliability. If men can carry out the tests in the privacy of their own home this makes the procedure much less awkward for them. Moreover, the probability of a reliable diagnosis is increased as well. Finally, the researchers think that the costs for health insurers can be decreased too […] On the new chip, the spermatozoa flow through a fluid channel, above which electrodes are fitted. When a cell flows under this ‘bridge’, its electrical resistance changes momentarily, and this event is counted. […] The user will only be able to see that the test has been completed successfully; the gynecologist will inform him of the actual results personally.6

When we look at the structure of these expectations, there are some similarities. In both cases the expectation statements rest on three interrelated types of claims:

  1. claims about the characteristics and functioning of the technology;

  2. claims about how the technology will be adopted by the intended users and how it will be integrated in current (medical) practice;

  3. claims about how the technology will address a social problem or need. (In both quoted fragments, this claim is situated at the very beginning as a preliminary justification of the desirability of the technology).

Although the precise rhetorical structure of expectations in specific cases can vary and the identification of the different claims is not always clear-cut, this distinction serves the analytical function of singling out the different, but often entangled, messages implied in techno-scientific expectations. In particular, it shows that such expectations claim much more than just technical possibility. If this is the case, a ‘responsible discourse on societal and ethical aspects’ [19] should not only assess claims on technical feasibility. The claims on usability and desirability of the expected technology should be assessed as well. In the next section, we will see what this means in practice.

Unraveling Expectations

We have pointed out three different claims that can be distinguished in broad expectation statements addressed to a lay audience. This means that, in the illustrative case of the expectation on the “fertility chip” (the second quotation above), ethicists should assess not only the conditions for the technical possibility of the chip, but also the conditions for its usability and desirability. For each of these aspects, we cannot assess their objective reality or truth; we can, however, analyze to what extent and under which conditions a specific audience, holding more or less specific knowledge on the topic, considers these expectations plausible.

A: Technological Feasibility: Eliminating Strategic Distortions

The first question when assessing the technical feasibility of emerging technologies is: shouldn’t the ethicist trust the scientific experts? Why not simply believe researchers working on the fertility chip when they expect that their chip will eventually assess semen concentration and motility more reliably than the current microscope-based technique?

There are two reasons for distrust. First, the history of technology development illustrates that many such claims by scientists never materialize. Moreover, the sociology of expectations [4] points out that such claims are often motivated by the strategic aim to attract support and funding. Ethicists facing public expectations on emerging technologies have to keep in mind that: a) when speaking in public scientists have to adapt to the audience’s background knowledge, and b) in a specific context scientists might have strategic reasons to make their work acceptable and important for society, framing what they do within a storyline that is meaningful for the audience. Thus, the aims of the actor who states expectations and the audience that she has to reach are two important factors to take into account when analyzing expectations’ plausibility. Whether expectations appear in newspapers or TV shows, funding proposals, scientific journals or informal research meetings, they are constructed to be convincing for a specific audience and goal.

Thus, ethicists should address questions like: What is the background of the person uttering an expectation? What is the audience s/he is addressing? What could be the strategic role of the expectation under examination? This contextualization allows the ethicist to point out some discursive strategies used by scientists to persuade the audience of the feasibility of the technology: for example, the emphasis on some particular characteristics of the chip—like its reliability and portability, or the joint presentation of some challenges for the chip development and the planned strategies to address them,7 or the stress on the reasons for trusting the success of the technology.8

In particular, scientists might justify their expectations on some future technology on the basis of the state of the art (like their previous success with the lab-on-a-chip platform), scientific evidence (the experimentally proven ability of the chip to distinguish between different particles) and the scientific paradigm (concentration, motility and shape of spermatozoa as indicators of sperm fertility). At the level of public communication these justifications for trusting expectations might not be discussed because there is an “epistemic asymmetry” [25] between the speaker and the audience. However, it is possible for the ethicist interested in emerging technologies to investigate expectations at the locations where scientific development takes place, like scientific journals and laboratories. In those settings researchers relate to their peers and try to gain their acceptance [9].

As laboratory studies have shown, the laboratory is the place where controversies and compromises take place before scientific facts are created and communicated to the outside world [14]. In this “protected” space, the discursive strategies, used by researchers to build their arguments and convince their “internal” audience, are different from the ones that are employed to reach an “external” audience. Thus, while scientific articles describe the outcomes of scientific research as “facts” abstracted from modality and history, the laboratory offers an interesting access to expectations, where the materiality of the technology is still visible, together with the uncertainties, challenges, controversies and doubts. In such a context, an ethicist comes across controversies and uncertainties that are covered-up in more public arenas, and may analyze the conceptual and material building blocks of expectations.

Let’s take again the example of the fertility chip. The fertility chip is expected to measure the change in resistance when a particle, say a spermatozoon, passes through two electrodes. Such a disturbance on the field line results in a peak in the signal. By counting the peaks, it is possible to determine the concentration of spermatozoa in the samples, which is their number per milliliter. Between this physical principle based on Ohm’s law and the final performance of the chip, there are many other considerations, physical principles, technical and biological constraints and scientific uncertainties that play an important role in the development process. For example, how are researchers’ expectations on the measurement of an electrical field’s disturbance affected by the necessity to operate with nano-quantities of fluid? What are the challenges they encounter when they want to include the sensor in a “user-friendly” device? And how do scientific controversies on the best classifications and reference values for assessing semen quality affect their decisions?
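To make this counting principle tangible, the following minimal sketch (ours, not the researchers’ code) simulates a resistance trace with a few pulses and estimates a concentration by counting threshold crossings, in the spirit of a Coulter counter. The baseline level, pulse height, threshold and analyzed volume are hypothetical values chosen purely for illustration.

```python
import numpy as np

# Minimal illustrative sketch (not the researchers' actual algorithm): estimate
# particle concentration from a resistance trace by counting threshold-crossing
# peaks. Baseline, pulse height, threshold and analyzed volume are hypothetical.

rng = np.random.default_rng(0)

# Simulated baseline resistance signal (arbitrary units) with a few pulses,
# each pulse standing in for a cell briefly raising the resistance between
# the electrodes.
signal = rng.normal(loc=1.0, scale=0.01, size=5000)
for start in (400, 1300, 2100, 3500, 4200):
    signal[start:start + 20] += 0.2  # resistance peak caused by one cell

def count_peaks(trace: np.ndarray, threshold: float) -> int:
    """Count rising crossings of the threshold; one crossing = one counted cell."""
    above = trace > threshold
    rising = above[1:] & ~above[:-1]
    return int(rising.sum())

analyzed_volume_ml = 1e-6  # hypothetical fluid volume pushed past the sensor
cells = count_peaks(signal, threshold=1.1)
concentration = cells / analyzed_volume_ml  # cells per milliliter

print(f"counted {cells} peaks -> ~{concentration:.1e} cells/ml")
```

Even in this toy version, the estimate stands or falls with assumptions (the threshold choice, the analyzed volume, the ability to distinguish spermatozoa from other particles) of exactly the kind that broad expectation statements leave implicit.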

On the laboratory floor, the ethicist can explore techno-scientists’ expectations in a context that is less tainted by strategic considerations. This helps in pointing out uncertainties, scientific controversies and additional challenges that might constrain or redirect the development of the technology. Such an awareness of the state of the art should reduce wild speculations on the technology’s potential. Furthermore, by understanding and analyzing researchers’ expectations on how the technology will work, the ethicist can identify details that are relevant for prospective thinking on the device’s performance outside the lab. For example, the expected fertility chip, despite its accuracy in comparison to other “pre-scanning” devices for male fertility, is presented by researchers as not accurate enough to replace more decisive laboratory tests or to provide reliable results with only one measurement. What does that mean for its usability?

B: Societal Usability: Incorporating User Perspectives to Assess Implicit Scripts

Exploring the performance of an emerging technology in the context of the lab is only one step to assess expectations. Even if the laboratory is not a purified space of “facts” production, but rather a material environment of controversies and social practices, the expectations on the performance of a device are somewhat reduced to technical conditions. However, technologically possible expectations might imply impossible social conditions. The next step, then, is to analyze the expected use of the technology and its performance in societal practices. In the case of the fertility chip, for example, it is expected that ‘men can carry out the tests in the privacy of their own home […] The user will only be able to see that the test has been completed successfully; the gynecologist will inform him of the actual results personally’.9 This vision of future medical practice is built on some presuppositions about the way society works and about the practices in which the technology is supposed to be embedded. Along the lines of “script theory” [1], we can reconstruct the network of relations designed around a technology and the assumptions about the environment in which the artifact will be performing. These assumptions and the network of relations envisioned in expectations of future technologies have been referred to as a “fictive script” [6].

Uncovering the fictive script of an emerging technology means inquiring into how researchers envision the context of application of the technology and how it will be handled by doctors, insurance companies, etc., and finally the patient. For example, the intended users of the fertility chip are both men, who want to test themselves for infertility, and gynecologists, who will inform them of the result. How will men get the chip? What social structure should be in place for this purpose? What will the chip look like when handed over to the patient? What is the sequence of actions the patient should perform at home? How many times should sample testing be repeated? What should the gynecologist do when the patient brings in the semen analyzer? In which cases is the result considered reliable? Addressing these questions means producing detailed descriptions of the envisioned use of the device beyond the lab walls. Such “thick descriptions”10 are a tool to imagine the device in its social context and to point out the social conditions that are implicit in the expectations on an emerging technology.

By adding details on the expected context of use and on the users imagined by researchers, a more concrete scenario is outlined. The next step is to investigate to what extent these scenarios are plausible for an audience of gynecologists, nurses, GPs, clinical chemists, patients and other actors who are engaged in relevant existing practices and have some expertise on how they work. Their knowledge and expertise can be collected with methods developed in “Constructive Technology Assessment” practices, including interviews or workshops with various experts that are or will be related to the technology at stake and with (potential) user groups [23]. This input may then help to assess the social conditions implicit in the visions and designs of the technology developers. It could serve to rule out particular presuppositions or visions because they are implausible to stakeholders who have better expertise in a certain social practice than the technology developers, or it could help to point out where changes in current practice would be necessary. Interactive workshops with a broad set of stakeholders may help to explore how, for the envisioned technology to function in the expected social setting, institutional changes might be needed or, on the contrary, how current institutional dynamics and configurations might dictate certain technical requirements. For example, when we reconstruct how the semen analyzer is supposed to be used in the practice of testing male fertility, we can point out the different steps that users have to go through. Then, based on stakeholders’ expectations, one can hypothesize how the current logistical system for testing male fertility would have to change to accommodate this new device; or how institutional, economic or legal resistance might prevent the expected use of the device in medical practice, or even encourage its use in another practice, for example to assess bulls’ fertility in deeds of sale in the animal industry.

Analyzing the expected use of the emerging technology by investigating what is considered as plausible by potential users, who are not directly involved in the development of the technology, is both a critical and a constructive activity. By discussing the envisioned technology with intended users and other stakeholders, it is possible to point out where developers’ visions collide with other actors’ views and beliefs. But it also adds new perspectives and enriches the initial visions by imagining how current practices might change, spurred by social dynamics and users’ creativity.

C: Desirability: Exploring the Assumed Morality

The third and last dimension of technological expectations is the appeal to desirability: the new technology is claimed to satisfy individual and/or societal needs or to solve a problem (of an individual, of society, of humankind or even of the world at large). Assessing these claims builds on the previous two analyses, but now the focus is on how the impact of a technology will be normatively evaluated. What is scrutinized here is how the morality implied in an expectation is accepted as plausible by different audiences, each judging from its own morality.

In techno-scientific promises, the “benefits” of the technology are presented as unquestionable. Their desirability is often asserted as self-evident. All these claims about a technology’s “benefits” carry along certain systems of values and morality. For instance, the expectations about the fertility analyzer hint at very general values that may seem hardly contestable, like privacy, autonomy and cost-efficiency. Which man would not appreciate having his privacy respected, his intimacy increased, and his comfort improved when testing his own fertility? Which society does not want health care expenses to be reduced and the quality of care increased? And which gynecologist would not welcome easier work and happier patients? However, as Nordmann [18] shows for the case of nanomedicine, on further inspection such general claims often prove quite vague and ambiguous, or even inconsistent, because they are entangled in multiple views of what is ‘good’. A philosophical analysis, aimed at “disentangling” the concepts and values lurking behind the eye-catching slogans, is needed in order to enable a fruitful debate on the desirability of the new technology. Such an analysis would, for instance, unravel the promises that the fertility chip will improve current medical practice or protect the privacy of the patient, analyzing the assumed concepts of “privacy” or “good care” and assessing the internal consistency of these promises.

Besides the inherent strategic ambiguity in the normative stance of many promises on NEST, an analysis of desirability claims should also take into account how these claims are appraised by audiences with diverse normative stances. This point is highlighted by the type of Technology Assessment known as “Vision Assessment” [10]. According to this approach, expectations and visions on NEST imply certain values, norms and views of the good life. To improve political deliberation on NEST, these normative worldviews should be explicated and analyzed by inquiring into the normative perspectives of different stakeholders and pointing out potential controversies. Normative controversies on the benefits of an emerging technology show that not every potential user or stakeholder deems techno-scientists’ universal claims on desirability plausible. The allegedly universal benefits that a technology is supposed to bring about are, de facto, not unanimously shared or understood. Drawing again on the case of the fertility analyzer, we could explore the values implicit in different stakeholders’ expectations. For example, the reasons why engineers consider the technology “good” might not match doctors’ and patients’ considerations: whereas engineers might be concerned about the accuracy of the test, doctors might be more focused on their ability to control the result of the test, while patients might emphasize the portability of the device. This diversity in what is considered ‘good’ about the chip may hide potential controversy about what would be the best use of the chip. Some patients could claim, for example, the freedom not to go to the gynecologist’s for additional testing, whereas doctors may claim the responsibility to assess the quality of the test.

Potential future controversies around the desirability of the technology can also be imagined actively and systematically by using patterns of moral argumentation known from earlier debates on emerging technologies. As Swierstra and Rip [26] have shown, such debates often have a ‘grammar’ that can be used to speculate in a controlled way about the way future debates on newly emerging technologies might evolve. For the case of the fertility analyzer the patterns highlighted by Swierstra and Rip suggest that future controversies might emerge about, for example, the equality of opportunities for men and women, the medicalization of everyday life, or the reduced quality of healthcare practice. By exploring the implicit conflicts of values between developers and users or among users, and by systematically imagining future ethical controversies, it is possible to assess how robust the initial, apparently uncontroversial expectations really are.

This analysis of normative controversies—inherent in the promises of the desirability of emerging technologies, or implicit in stakeholders’ expectations, or likely to emerge in future ethical debate—shows that the allegedly universal worldviews offered in promises can be scrutinized according to the normative stance of situated audiences. Moreover, these desirability claims should also be assessed with respect to their vague assertions that the technology in question will have some intended (desirable) impacts on society and individuals. The exercise of developing “thick descriptions” of the future practice, discussed in the previous section, comes in handy here. Such descriptions offer a situated context in which we can develop ‘an open eye or a keen sensitivity’ [16] for how the expected technology may have an impact on people’s daily life, practical routines, experience of their bodies, power relations, allocations of roles and responsibilities—the so-called ‘soft impacts’ of the technology [27]. For example, if the user of the fertility analyzer has to test himself constantly for one month and wait for the gynecologist’s verdict, can we still deem this device user-friendly? And if the user has to handle the analyzer in the privacy of his own home, what new obligations arise from this gained autonomy? Recent research in empirical philosophy of technology provides ample examples of how to bring into view the often very mundane, but nonetheless very real impacts of novel technologies [2, 17, 31].

These philosophically-oriented studies, together with more sociologically-oriented research on technology’s “users” [15, 21], show that practices may change in unexpected ways because of the introduction of a new technology. These phenomena have to be taken into account when appraising the expected desirable impact of the technology on human practices. The fertility analyzer, for instance, while changing the practice of testing for male fertility, might also affect the feelings and the behavior of users’ partners, and this could, in turn, have unintended consequences for the way the device is used. Exploring such unexpected turns of events broadens developers’ and stakeholders’ visions of future techno-social practices with phenomena that are plausible in the context of the history of technology: as history shows, it is the exception rather than the rule that users employ artifacts solely in the ways intended by the designers.

These unexpected events, however, become less surprising if we frame them in the context of the phenomenon of techno-moral change [27]. If it is true that technology, society and morality are bound up together, then changes in one are likely to invite changes in the others [13]. Established norms and values inform technology development, but new technologies also create new problems, opportunities and constraints that the existing moral resources cannot cope with. We have to seriously consider the possibility that envisioned technological developments affect (and interact with) moral developments. History is full of examples of techno-moral change. The telescope enabled Galileo’s discovery that the Earth revolves around the sun—a discovery with profound religious and anthropological repercussions. The automobile enabled people to escape the traditional moralities of their villages. The contraceptive pill not only substantially contributed to women’s emancipation, but also to that of homosexuals, as this technology severed the ‘sacred’ link between sex and procreation that was one of the main grounds for moral condemnation of homosexuals. The cell phone has revolutionized notions of privacy and availability; and so forth. Techno-moral change is not an exceptional event; it is to be expected. Assessing the desirability of visions of the future on the basis of the morality of today would therefore be unduly conservative.

It is striking, however, that the morality underlying technological expectations and visions is often all too familiar. As shown above in the case of the fertility analyzer, visions of emerging technologies center on values like health/wellbeing, safety and the environment—the usual suspects, so to speak. These visions do not take into account that the meaning of these values or their relevance may be liable to change [28]. Thus, whereas feasibility and, to a lesser extent, usability claims need to be assessed because of uncritical hyping and futurism, claims on (moral) desirability more likely need to be assessed because of unjustified conservatism. An assessment of expectations and visions demands serious reflection on potential techno-moral change, rather than an uncritical acceptance of an allegedly shared current morality.

The ethicist should aim to make stakeholders more sensitive to the novelty-creating role of technology, by exploring the unexpected moral changes that the new technology might bring about. This can be done in a grounded way by building on instances of techno-moral change that have occurred in the past. We are not suggesting, of course, that history repeats itself and that the imagined future simply is an updated version of past events. However, history does provide examples from which we can draw patterns of the mutual shaping of technology and morality that are an important ingredient for fictive, and yet grounded, reflections on future techno-moral change.

Ethics of NEST as Broadening: Between Plausibility and Moral Imagination

So far, we have elaborated on Nordmann’s and Rip’s plea for a critical assessment of expectations with regard to emerging technologies. We agree with them that ethical reflection on emerging technologies should be based on plausible expectations with regard to what these technologies might do in society. We argued that an assessment of techno-scientific expectations on emerging technologies should concern not only their technical possibility, but also the expected social usability of the technology and its expected (moral) desirability. In pointing out possible strategies and conceptual tools for such an assessment, we observed that assessing claims on these three dimensions often leads to divergent conclusions. Whereas claims of technological feasibility and societal acceptance may often be too speculative (as [20] already pointed out), we argued that, by contrast, claims of moral desirability as a rule are insufficiently speculative. Therefore, an ethicist who wants not just to assess but also to help improve the quality of expectations will have to play very different roles. We will elaborate on this in the next section. Here, we want to reflect on the basis for these diagnoses. How exactly can ethicists observe a lack or an excess of plausibility? Which perspective is implied in such assessments?

The concept of “plausibility” is inherently intersubjective: a statement is plausible when it makes sense to a specific audience. Critics may wonder, then, whether it is useful or even possible to check or contest the plausibility of claims from a standpoint that is external to this intended audience. If the audience is decisive, doesn’t this imply that the validity of judgments on plausibility is context dependent? A lay audience may judge claims as plausible that seem utterly implausible to a group of scientific experts, and even different groups of experts may have very different views on what is plausible. So how can we (and other ethicists) criticize technological expectations for being too futuristic and moral expectations for being too conservative? To clarify our position it is useful to refer once more to the tradition of rhetoric.

The definition of “plausible” referred to in “The Rhetorical Character of Expectations” above includes two aspects: approbation and appearance. An audience’s approbation of a claim will usually depend on its perception that the claim appears to be right. Well-known and familiar statements will receive approbation more easily than new and unexpected ones. Statements clearly linked to an audience’s sphere of understanding are more likely to trigger suspension of disbelief. This seems true both for lay audiences and for scientific and technological experts. It seems safe to claim, then, that statements on what emerging technologies will bring about will usually be judged against a background of existing views on technology, society and morality.

This situatedness of judgments on the plausibility of expectations, however, does not mean that we have to accept all judgments at face value. On the contrary, by becoming aware of the contextual aspects and by comparing divergent views, we can also point out (some of) the assumptions that characterize an audience’s epistemic and moral background and analyze them critically. In this way, we can become more aware of the limitations of some visions. A lay audience may not be in a good position to judge claims of technological feasibility. An engineer, on the other hand, may hardly have the background knowledge to judge claims on behalf of societal plausibility. And both may lack the knowledge to assess claims on behalf of the potential interaction between technology and morality. However, all these perspectives are needed to improve the epistemological conditions for a normative reflection on the desirability of emerging technologies.

In order to avoid ‘mere speculations’ in the normative debate on emerging technologies, the context in which expectations are assessed should be broadened in two respects. The first is what we might call a horizontal broadening: by including different sources of information as well as different stakeholders, both the number of people and the background knowledge against which to judge certain claims may be extended. This type of broadening is also aimed for in “participatory” or “constructive” technology assessment and ethicists can learn a lot from these endeavors [23]. The second could be called a vertical broadening: by feeding the discussions and assessments with historical knowledge, the plausibility of claims on behalf of emerging technologies can be grounded in experience, at least to some extent. This historical knowledge may relate to past cases of emerging technologies, evolutions of social practices, cases of users’ “unexpected” behaviors, patterns of ethical controversies and techno-moral change.

This historical perspective undergirds our conclusions with regard to technological and social versus moral plausibility. Claims on behalf of technological feasibility seem to neglect that the path of history is strewn with expectations that failed to come true. Likewise, claims on behalf of societal plausibility seem to ignore the many ways in which things ‘bit back’ in the past [29]: technologies developed in the lab for a particular kind of aim and a particular type of setting may serve very different aims and functions in completely different settings when used in the ‘real world’. Claims on behalf of moral plausibility, on the other hand, tend to neglect the many examples of moral change resulting from technological development. Again, such historical insights cannot be used as unambiguous evidence to ‘prove’ the (im)plausibility of a specific claim with regard to the future. They may help, nonetheless, to correct one-sided views of the future and they can suggest which factors prevented or accelerated change in the past.

This is exactly where the ethicist may play a mediating and facilitating role between science and society in processes of deliberation on emerging technologies. By exploring expectations of an emerging technology in different contexts, ethicists can temporarily step out of the situated judgments of plausibility, analyze controversies, and scrutinize epistemic and normative assumptions. In doing so, additional information—about technical challenges, social dynamics, users’ creativity, value frameworks and techno-moral changes—might be brought into the debate, broadening the background knowledge of the lay audience. In this respect, the role of the ethicist is to stretch the lay audience’s imagination, showing that what seemed plausible at first glance may be less so when we look at it more closely, and conversely that what seemed implausible at first glance might appear more likely when we take into account other perspectives and knowledge. The initial imaginative boundaries are, therefore, redrawn by altering the assumed shared knowledge and information. This puts all stakeholders in a better position to deliberate democratically on emerging technologies.

Some Tools for Ethics of Emerging Technologies

We started with the observation that debates on the desirability of emerging technologies by definition deal with an elusive object. Because emerging technologies to a large extent consist of expectations, visions, and promises, ethicists who want to facilitate debate on the desirability of such emerging technologies should first assess the quality of the claims that compose an expectation and explore their content. Since these claims are multifaceted, this assessment requires a broad set of different activities. It is not our aim here to present a full-fledged methodology. However, the preceding analysis suggests at least some tools that could be used to assess expectations and thus to avoid the twin dangers of purely speculative AND unimaginative ethical reflection.

As far as “technical feasibility” is concerned, ethicists will have to critically assess the context in which specific claims on behalf of emerging technologies are made and reinterpret them in view of this context. They should also make sure that a number of experts are consulted and confronted with each other. And finally, they can actively search for situations in which expectations on the technology can be analyzed in more depth, for example by going to the lab or the R&D department. Here, ethicists could investigate the history of a particular technology as well as the uncertainties and challenges at stake in current developments. All this is necessary to imagine in more detail whether and how the final technological product is expected to work and to rule out at least some expectations on potential uses and applications as too futuristic.

Since engineers’ considerations in making technical decisions partially depend on their ideas about the social use of the final product, this first step should be followed by an assessment of the “social usability” of their expectations. Here, the ethicist should aim for ‘thick descriptions’ of the practice in which the technology is supposed to be embedded. She should ensure that visions of a device are translated into visions of technological practices, for example by analyzing the ‘fictive scripts’ implicit in the design. Subsequently, this envisioned practice should be discussed with potential users and other stakeholders. CTA interactions and workshops are good tools to explore future developments of technology in society, grounding such visions on stakeholders’ situated expertise and learning of techno-social dynamics. Here, again, historical knowledge of how the current practice evolved and which factors were crucial in that evolution might help to judge the likelihood of specific changes.

Finally, the ethicist should critically appraise the claims on the desirability of the technology. Rather than using these claims about benefits as a starting point for passing ethical judgment, the ethicist should assess the assumptions under which they seem plausible. Such claims presuppose an allegedly shared morality; the ethicist’s task now is to look for potential controversies on the desirability of the technology at hand. By making explicit which values and moral justifications are embedded in expectations and analyzing which values are implied by the current practices and ideals of different stakeholders, the ethicist can discover some not-yet-expressed ethical controversies. By using the historically informed NEST-ethics approach [26], it is also possible to systematically imagine which future controversies might emerge. In order to imagine how technology, society and morality can interact in unexpected ways, we propose to consider the above-mentioned ‘thick descriptions’ in light of the lessons about how users’ creativity often upsets designers’ visions and makes unexpected (and seemingly implausible) scenarios become real. Finally, ethicists might draw on the history of techno-moral change. Patterns observed in the mutual shaping of technology and morality in the past may be used to construct scenarios of future changes.

This set of tools should prepare the ethicist of emerging technologies to deal with this elusive object without yielding to futuristic or conservative claims. These tools should create the conditions for a debate on emerging technologies in which all relevant stakeholders can participate and discuss grounded, and yet unexpected, futures.

Conclusion

Nordmann and Rip correctly observed that ethics goes along with the hypes and hopes of emerging technologies far too easily. Skeptics might conclude from this observation that it does not make sense to assess the potential ethical implications of emerging technologies, because it is simply too early to tell what their impact will be. In this paper, we have argued that it is possible to perform an early “ethical” assessment of emerging technologies that is both grounded and imaginative. We have articulated an approach in which the quality of techno-scientific promises is assessed comprehensively: not only the claims on technical feasibility, but also those on usability and desirability are analyzed, questioned and broadened. The role of the ethicist is to analyze the judgments of plausibility of techno-scientific promises, and to point out internal controversies that can ground the discourse on emerging technologies in concrete technological and social practices. The ethicist contributes to the quality of expectations by exploring the social and human context in which the technology is expected to operate, and by integrating lessons on moral change that may be learned from the philosophy and history of technology. Such interventions can broaden both the setting and the content of ethical deliberations on emerging technologies. From this broadened epistemic background, society can re-frame plausibility judgments on emerging technologies and ground the debate on their desirability in constructive moral imagination.

Open Access

This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

Footnotes

1For brevity’s sake, from now on we will use interchangeably the generic concepts “expectation” or “promise” to refer to all different forms of ‘presenting future new and emerging technologies’, like visions, warnings, etc.

2Armin Grunwald, in a response to Nordmann and Rip’s article [11], claims that they confuse ‘explorative nanophilosophy’ with ‘applied ethics’. Though he rejects the notion of ‘reality check’ as misleading in this context, Grunwald acknowledges ‘the need for developing criteria and procedures for better being able to distinguish between mere speculations and more plausible futures’(p. 95). Our article can be considered as a further investigation of how an explorative philosophy of nanotechnology (or new and emerging technologies, in general) can critically appraise technology-based visions of the future.

3[30] points out that these levels are interconnected and that the related statements, agendas and actions are “nested” (ibid., p. 184).

4According to the Oxford English Dictionary, the primary meaning of ‘plausible’ is ‘acceptable, agreeable, pleasing, gratifying; winning public approval, popular’ or ‘of an argument, an idea, a statement, etc.…: seeming reasonable, probable or truthful; convincing, believable’.

5“Philips and bioMérieux announce partnership to develop and market next-generation handheld diagnostic solutions for point-of-care.” (n.d.). Retrieved January 18, 2011, from http://www.business-sites.philips.com/sites/philipsbs/magnotech/press/20100107-partnership.page.

6University of Twente (2010, February 17). “Reliable home male fertility test? Accurate sperm counts now possible”. ScienceDaily. Retrieved January 18, 2011, from http://www.sciencedaily.com/releases/2010/02/100216221151.htm#

7“Concentration is not the only indicator of sperm quality. Spontaneous activity—also known as motility—and the shape of the spermatozoa are also important factors. Further research will need to establish whether these two quality characteristics can be measured in a similar manner, so that a compact device can be developed in which a chip can be inserted for single use.” University of Twente (2010, February 17). “Reliable home male fertility test? Accurate sperm counts now possible”. ScienceDaily. Retrieved January 18, 2011, from http://www.sciencedaily.com/releases/2010/02/100216221151.htm#

8“It is important that the count distinguishes between spermatozoa and other particles or cells in the fluid: if other particles are included the count will be unreliable. Segerink added minuscule balls to the fluid to test its selectivity. The method proved to be selective enough to distinguish between the balls and the spermatozoa.” (Ibidem)

9University of Twente (2010, February 17). “Reliable home male fertility test? Accurate sperm counts now possible”. ScienceDaily. Retrieved January 18, 2011, from http://www.sciencedaily.com/releases/2010/02/100216221151.htm#

10We borrow the term “thick description” from the field of anthropology; in future oriented studies they can also be called “scenarios”.

References

1. Akrich M (1992) The description of technological objects. In: Bijker W, Law J (eds) Shaping technology/building society: studies in sociotechnical change. MIT Press, Cambridge
2. Boenink M (2010) Molecular medicine and concepts of disease: the ethical value of a conceptual analysis of emerging biomedical technologies. Med Healthc Philos 13(1):11–23. doi:10.1007/s11019-009-9223-x
3. Borup M, Brown N, Konrad K, Van Lente H (2006) The sociology of expectations in science and technology. Technol Anal Strat Manag 18(3–4):285–298. doi:10.1080/09537320600777002
4. Brown N, Michael M (2003) A sociology of expectations: retrospecting prospects and prospecting retrospects. Technol Anal Strat Manag 15(1):3–18
5. Collingridge D (1980) The social control of technology. Open University Press
6. De Laat B (2000) Scripts for the future: using innovation studies to design foresight tools. In: Brown N, Rappert B, Webster A (eds) Contested futures: a sociology of prospective techno-science. Ashgate, Aldershot
7. Deyo RA, Patrick DL (2005) Hope or hype: the obsession with medical advances and the high cost of false promises. AMACOM, American Management Association, New York
8. Fisher E, Mahajan RL, Mitcham C (2006) Midstream modulation of technology: governance from within. Bull Sci Technol Soc 26(6):485–496. doi:10.1177/0270467606295402
9. Gilbert G, Mulkay MJ (1984) Opening Pandora’s box: an analysis of scientists’ discourse. Cambridge University Press, Cambridge
10. Grin J, Grunwald A (2000) Vision assessment: shaping technology in 21st century society. Towards a repertoire for technology assessment. Springer, Berlin
11. Grunwald A (2010) From speculative nanoethics to explorative philosophy of nanotechnology. NanoEthics 4(2):91–101. doi:10.1007/s11569-010-0088-5
12. Guston D (2002) Real-time technology assessment. Technol Soc 24(1–2):93–109. doi:10.1016/S0160-791X(01)00047-1
13. Keulartz J, Schermer M, Korthals M, Swierstra T (2004) Pragmatist ethics for a technological culture. Kluwer Academic Publishers, Dordrecht
14. Latour B, Woolgar S (1979) Laboratory life: the social construction of scientific facts. Sage, Beverly Hills
15. Lie M, Sørensen KH (1996) Making technology our own? Domesticating technology into everyday life. Scandinavian University Press, Oslo
16. Mol A (2000) What diagnostic devices do: the case of blood sugar measurement. Theor Med Bioeth 21(1):9–22. doi:10.1023/A:1009999119586
17. Mol A (2002) The body multiple: ontology in medical practice. Duke University Press, Durham
18. Nordmann A (2007) Knots and strands: an argument for productive disillusionment. J Med Philos 32(3):217–236. doi:10.1080/03605310701396976
19. Nordmann A (2007) If and then: a critique of speculative nanoethics. NanoEthics 1(1):31–46. doi:10.1007/s11569-007-0007-6
20. Nordmann A, Rip A (2009) Mind the gap revisited. Nat Nanotechnol 4(5):273–274. doi:10.1038/nnano.2009.26
21. Oudshoorn N, Pinch TJ (2003) How users matter: the co-construction of users and technologies. MIT Press, Cambridge
22. Perelman C, Olbrechts-Tyteca L (1969) The new rhetoric: a treatise on argumentation. University of Notre Dame Press, Notre Dame
23. Rip A, Kulve H (2005) Constructive technology assessment and socio-technical scenarios. Nanotechnology 1:49–70
24. Rip A, Misa TJ, Schot J (1995) Managing technology in society: the approach of constructive technology assessment. Pinter, London and New York
25. Sung J, Hopkins M (2006) Towards a method for evaluating technological expectations: revealing uncertainty in gene silencing technology discourse. Technol Anal Strat Manag 18(3–4):345–359. doi:10.1080/09537320600777119
26. Swierstra T, Rip A (2007) Nano-ethics as NEST-ethics: patterns of moral argumentation about new and emerging science and technology. NanoEthics 1(1):3–20
27. Swierstra T, Stemerding D, Boenink M (2009) Exploring techno-moral change: the case of the obesity pill. In: Sollie P, Düwell M (eds) Evaluating new technologies. Springer, Dordrecht, pp 119–138. doi:10.1007/978-90-481-2229-5_9
28. Swierstra T, van Est R, Boenink M (2009) Taking care of the symbolic order: how converging technologies challenge our concepts. NanoEthics 3(3):269–280. doi:10.1007/s11569-009-0080-0
29. Tenner E (1996) Why things bite back: technology and the revenge of unintended consequences. Knopf, New York
30. van Lente H (1993) Promising technology: the dynamics of expectations in technological developments. Universiteit Twente, Faculteit Wijsbegeerte en Maatschappijwetenschappen
31. Verbeek P (2005) What things do: philosophical reflections on technology, agency, and design. Pennsylvania State University Press, University Park
