This paper discusses the problem of responsibility attribution raised by the use of artificial intelligence technologies. It is assumed that only humans can be responsible agents; yet this alone already raises many issues, which are discussed starting from two Aristotelian conditions for responsibility. Next to the well-known problem of many hands, the issue of “many things” is identified and the temporal dimension is emphasized when it comes to the control condition. Special attention is given to the epistemic condition, which draws attention to the issues of transparency and explainability. In contrast to standard discussions, however, it is then argued that this knowledge problem regarding agents of responsibility is linked to the other side of the responsibility relation: the addressees or “patients” of responsibility, who may demand reasons for actions and decisions made by using AI. Inspired by a relational approach, responsibility as answerability thus offers an important additional, if not primary, justification for explainability based, not on agency, but on patiency.
Machine generated contents note: -- Acknowledgements -- Introduction - The Problem of Moral Status -- PART I: MORAL ONTOLOGIES: FROM INDIVIDUAL TO RELATIONAL DOGMAS -- Individual Properties -- Appearance and Virtue -- Relations: Communitarian and Metaphysical -- Relations: Natural and Social -- Relations: Hybrid and Environmental -- Conclusion Part I: Diogenes's Challenge -- PART II: MORAL STATUS ASCRIPTION AND ITS CONDITIONS OF POSSIBILITY: A TRANSCENDENTAL ARGUMENT -- Words and Sentences: Forms of Language Use -- Societies and Cultures (1): Forms of Living Together -- Societies and Cultures (2): Forms of Life -- Bodies and Things: Forms of Feeling and Making -- Spirits and Gods: Forms of Religion -- Fences, Walls, and Maps: Forms of Historical Space -- Moral Metamorphosis: Concluding the Transcendental Argument -- General Conclusion -- References -- Index.
Should we grant rights to artificially intelligent robots? Most current and near-future robots do not meet the hard criteria set by deontological and utilitarian theory. Virtue ethics can avoid this problem with its indirect approach. However, both direct and indirect arguments for moral consideration rest on ontological features of entities, an approach which incurs several problems. In response to these difficulties, this paper taps into a different conceptual resource in order to be able to grant some degree of moral consideration to some intelligent social robots: it sketches a novel argument for moral consideration based on social relations. It is shown that to further develop this argument we need to revise our existing ontological and social-political frameworks. It is suggested that we need a social ecology, which may be developed by engaging with Western ecology and Eastern worldviews. Although this relational turn raises many difficult issues and requires more work, this paper provides a rough outline of an alternative approach to moral consideration that can assist us in shaping our relations to intelligent robots and, by extension, to all artificial and biological entities that appear to us as more than instruments for our human purposes.
This book offers a systematic framework for thinking about the relationship between language and technology and an argument for interweaving thinking about technology with thinking about language. The main claim of philosophy of technology—that technologies are not mere tools and artefacts not mere things, but crucially and significantly shape what we perceive, do, and are—is re-thought in a way that accounts for the role of language in human technological experiences and practices. Engaging with work by Wittgenstein, Heidegger, McLuhan, Searle, Ihde, Latour, Ricoeur, and many others, the author critically responds to, and constructs a synthesis of, three "extreme", ideal-type, untenable positions: only humans speak and neither language nor technologies speak, only language speaks and neither humans nor technologies speak, and only technology speaks and neither humans nor language speak. The construction of this synthesis goes hand in hand with a narrative about subjects and objects that become entangled and constitute one another. Using Words and Things thus draws in central discussions from other subdisciplines in philosophy, such as philosophy of language, epistemology, and metaphysics, to offer an original theory of the relationship between language and technology centered on use, performance, and narrative, and taking a transcendental turn.
Can we build ‘moral robots’? If morality depends on emotions, the answer seems negative. Current robots do not meet standard necessary conditions for having emotions: they lack consciousness, mental states, and feelings. Moreover, it is not even clear how we might ever establish whether robots satisfy these conditions. Thus, at most, robots could be programmed to follow rules, but it would seem that such ‘psychopathic’ robots would be dangerous since they would lack full moral agency. However, I will argue that in the future we might nevertheless be able to build quasi-moral robots that can learn to create the appearance of emotions and the appearance of being fully moral. I will also argue that this way of drawing robots into our social-moral world is less problematic than it might first seem, since human morality also relies on such appearances.
Should we give moral standing to machines? In this paper, I explore the implications of a relational approach to moral standing for thinking about machines, in particular autonomous, intelligent robots. I show how my version of this approach, which focuses on moral relations and on the conditions of possibility of moral status ascription, provides a way to take critical distance from what I call the “standard” approach to thinking about moral status and moral standing, which is based on properties. It not only overcomes epistemological problems with the standard approach, but can also explain how we think about, experience, and act towards machines—including the gap that sometimes occurs between reasoning and experience. I also articulate the non-Cartesian orientation of my “relational” research program and specify the way it contributes to a different paradigm in thinking about moral standing and moral knowledge.
Scenarios involving the introduction of artificially intelligent (AI) assistive technologies in health care practices raise several ethical issues. In this paper, I discuss four objections to introducing AI assistive technologies in health care practices as replacements of human care. I analyse them as demands for felt care, good care, private care, and real care. I argue that although these objections cannot stand as good reasons for a general and a priori rejection of AI assistive technologies as such or as replacements of human care, they demand that we clarify what is at stake, develop more comprehensive criteria for good care, and rethink existing practices of care. In response to these challenges, I propose a (modified) capabilities approach to care and emphasize the inherent social dimension of care. I also discuss the demand for real care by introducing the ‘Care Experience Machine’ thought experiment. I conclude that if we set the standards of care too high when evaluating the introduction of AI assistive technologies in health care, we have to reject many of our existing, low-tech health care practices.
Can we trust robots? Responding to the literature on trust and e-trust, this paper asks if the question of trust is applicable to robots, discusses different approaches to trust, and analyses some preconditions for trust. In the course of the paper a phenomenological-social approach to trust is articulated, which provides a way of thinking about trust that puts less emphasis on individual choice and control than the contractarian-individualist approach. In addition, the argument is made that while robots are neither human nor mere tools, we have sufficient functional, agency-based, appearance-based, social-relational, and existential criteria left to evaluate trust in robots. It is also argued that such evaluations must be sensitive to cultural differences, which impact on how we interpret the criteria and how we think of trust in robots. Finally, it is suggested that when it comes to shaping conditions under which humans can trust robots, fine-tuning human expectations and robotic appearances is advisable.
This paper tries to understand the phenomenon that humans are able to empathize with robots and the intuition that there might be something wrong with “abusing” robots by discussing the question regarding the moral standing of robots. After a review of some relevant work in empirical psychology and a discussion of the ethics of empathizing with robots, a philosophical argument concerning the moral standing of robots is made that questions distant and uncritical moral reasoning about entities’ properties and that recommends first trying to understand the issue by means of philosophical and artistic work that shows how ethics is always relational and historical, and that highlights the importance of language and appearance in moral reasoning and moral psychology. It is concluded that attention to relationality and to verbal and non-verbal languages of suffering is key to understanding the phenomenon under investigation, and that in robot ethics we need less certainty and more caution and patience when it comes to thinking about moral standing.
Today it is widely recognized that we face urgent and serious environmental problems and we know much about them, yet we do very little. What explains this lack of motivation and change? Why is it so hard to change our lives? This book addresses this question by means of a philosophical inquiry into the conditions of possibility for environmental change. It discusses how we can become more motivated to do environmental good and what kind of knowledge we need for this, and explores the relations between motivation, knowledge, and modernity. After reviewing a broad range of possible philosophical and psychological responses to environmental apathy and inertia, the author argues for moving away from a modern focus on either detached reason and control or the natural, the sentiments, and the authentic, both of which make possible disengaging and alienating modes of relating to our environment. Instead he develops the notion of environmental skill: a concept that bridges the gap between knowledge and action, re-interprets environmental virtue, and suggests an environmental ethics centered on experience, know-how and skillful engagement with our environment. The author then explores the implications of this ethics for our lives: it changes the way we think about, and deal with, health, food, animals, energy, climate change, politics, and technology.
In this essay we reflect critically on how animal ethics, and in particular thinking about moral standing, is currently configured. Starting from the work of two influential “analytic” thinkers in this field, Peter Singer and Tom Regan, we examine some basic assumptions shared by these positions and demonstrate their conceptual failings—ones that have, despite efforts to the contrary, the general effect of marginalizing and excluding others. Inspired by the so-called “continental” philosophical tradition, we then argue that what is needed is a change in the rules of the game, a change of the question. We alter the normative question from “What properties does the animal have?” to “What are the conditions under which an entity becomes a moral subject?” This leads us to consider the role of language, personal relations, and material-technological contexts. What is needed, then, in response to the moral standing problem is not more of the same—yet another, more refined criterion and argumentation concerning moral standing, or a “final” rational argumentation that would be able to settle the animal question once and for all—but a turning or transformation in both our thinking about and our relations to animals, through language, through technology, and through the various place-ordering practices in which we participate.
How can we best identify, understand, and deal with ethical and societal issues raised by healthcare robotics? This paper argues that next to ethical analysis, classic technology assessment, and philosophical speculation we need forms of reflection, dialogue, and experiment that come, quite literally, much closer to innovation practices and contexts of use. The authors discuss a number of ways to achieve this. Informed by their experience with “embedded” ethics in technical projects and with various tools and methods of responsible research and innovation, the paper identifies “internal” and “external” forms of dialogical research and innovation, offers reflections on the possibilities and limitations of these forms of ethical–technological innovation, and explores a number of ways in which they can be supported by policy at national and supranational level.
In this paper, we engage in a philosophical investigation of how blockchain technologies such as cryptocurrencies can mediate our social world. Emerging blockchain-based decentralised applications have the potential to transform our financial system, our bureaucracies and models of governance. We construct an ontological framework of “narrative technologies” that allows us to show how these technologies, like texts, can configure our social reality. Drawing from the work of Ricoeur and responding to the works of Searle, in postphenomenology and STS, we show how blockchain technologies bring about a process of emplotment: an organisation of characters and events. First, we show how blockchain technologies actively configure plots such as financial transactions by rendering them increasingly rigid. Secondly, we show how they configure abstractions from the world of action, by replacing human interactions with automated code. Third, we investigate the role of people’s interpretative distances towards blockchain technologies: discussing the importance of greater public involvement with their application in different realms of social life.
In the philosophy of technology after the empirical turn, little attention has been paid to language and its relation to technology. In this programmatic and explorative paper, it is proposed to use the later Wittgenstein, not only to pay more attention to language use in philosophy of technology, but also to rethink technology itself—at least technology in its aspect of tool, technology-in-use. This is done by outlining a working account of Wittgenstein’s view of language and by then applying that account to technology—turning around Wittgenstein’s metaphor of the toolbox. Using Wittgenstein’s concepts of language games and form of life and coining the term ‘technology games’, the paper proposes and argues for a use-oriented, holistic, transcendental, social, and historical approach to technology which is empirically but also normatively sensitive, and which takes into account implicit knowledge and know-how. It gives examples of interaction with social robots to support the relevance of this project for understanding and evaluating today’s technologies, makes comparisons with authors in philosophy of technology such as Winner and Ihde, and sketches the contours of a phenomenology and hermeneutics of technology use that may help us to understand but also to gain a more critical relation to specific uses of concrete technologies in everyday contexts. Ultimately, given the holism argued for, it also promises a more critical relation to the games and forms of life technologies are embedded in—to the ways we do things.
Postphenomenology and posthermeneutics as initiated by Ihde have made important contributions to conceptualizing human–technology relations. However, their focus on individual perception, artifacts, and static embodiment has its limitations when it comes to understanding the embodied use of technology as involving bodily movement, as social, and as taking place within, and configuring, a temporal horizon. To account for these dimensions of experience, action, and existence with technology, this paper proposes to use a conceptual framework based on performance metaphors. Drawing on metaphors from three performance arts—dance, theatre, and music—and giving examples from social media and other technologies, it is shown that we can helpfully describe technology use and experience as performance involving movement, sociality, and temporality. Moreover, it is argued that these metaphors can also be used to reformulate the idea that in such uses and experiences, now understood as “technoperformances”, technology is not merely a tool but also takes on a stronger, often non-intended role: not so much as “mediator” but as choreographer, director, and conductor of what we experience and do. Performance metaphors thus allow us to recast the phenomenology and hermeneutics of technology use as a moving, social, and temporal—indeed historical—affair in which technologies take on the role of organizer and structurer of our performances, and in which humans are not necessarily the ones who are fully in control of the meanings, experiences, and actions that emerge from our engagement with the world, with technology, and with each other. This promises to give us a more comprehensive view of what it means to live with technology and how our lives are increasingly organized by technology—especially by smart technologies. Finally, it is argued that this has normative implications for an ethics and politics of technology, now understood as an ethics and politics of technoperformances.
Contemporary philosophy of technology after the empirical turn has surprisingly little to say on the relation between language and technology. This essay describes this gap, offers a preliminary discussion of how language and technology may be related to show that there is a rich conceptual space to be gained, and begins to explore some ways in which the gap could be bridged by starting from within specific philosophical subfields and traditions. One route starts from philosophy of language (both “analytic” and “continental”: Searle and Heidegger) and discusses some potential implications for thinking about technology; another starts from artefact-oriented approaches in philosophy of technology and STS and shows that these approaches might helpfully be extended by theorizing relationships between language and technological artefacts. The essay concludes by suggesting a research agenda, which invites more work on the relation between language and technology.
Most accounts of responsibility focus on one type of responsibility, moral responsibility, or address one particular aspect of moral responsibility such as agency. This article outlines a broader framework to think about responsibility that includes causal responsibility, relational responsibility, and what I call “narrative responsibility” as a form of “hermeneutic responsibility”, connects these notions of responsibility with different kinds of knowledge, disciplines, and perspectives on human being, and shows how this framework is helpful for mapping and analysing how artificial intelligence challenges human responsibility and sense-making in various ways. Mobilizing recent hermeneutic approaches to technology, the article argues that next to, and interwoven with, other types of responsibility such as moral responsibility, we also have narrative and hermeneutic responsibility—in general and for technology. For example, it is our task as humans to make sense of, with, and, if necessary, against AI. While from a posthumanist point of view, technologies also contribute to sense-making, humans are the experiencers and bearers of responsibility and always remain in charge when it comes to this hermeneutic responsibility. Facing and working with a world of data, correlations, and probabilities, we are nevertheless condemned to make sense. Moreover, this also has a normative, sometimes even political aspect: acknowledging and embracing our hermeneutic responsibility is important if we want to avoid that our stories are written elsewhere—through technology.
When is it ethically acceptable to use artificial agents in health care? This article articulates some criteria for good care and then discusses whether machines as artificial agents that take over care tasks meet these criteria. Particular attention is paid to intuitions about the meaning of ‘care’, ‘agency’, and ‘taking over’, but also to the care process as a labour process in a modern organizational and financial-economic context. It is argued that while there is in principle no objection to using machines in medicine and health care, the idea of them functioning and appearing as ‘artificial agents’ is problematic and attends us to problems in human care which were already present before visions of machine care entered the stage. It is recommended that the discussion about care machines be connected to a broader discussion about the impact of technology on human relations in the context of modernity.
Ethical reflection on drone fighting suggests that this practice does not only create physical distance, but also moral distance: far removed from one’s opponent, it becomes easier to kill. This paper discusses this thesis, frames it as a moral-epistemological problem, and explores the role of information technology in bridging and creating distance. Inspired by a broad range of conceptual and empirical resources including ethics of robotics, psychology, phenomenology, and media reports, it is first argued that drone fighting, like other long-range fighting, creates epistemic and moral distance in so far as ‘screenfighting’ implies the disappearance of the vulnerable face and body of the opponent and thus removes moral-psychological barriers to killing. However, the paper also shows that this influence is at least weakened by current surveillance technologies, which make possible a kind of ‘empathic bridging’ by which the fighter’s opponent on the ground is re-humanized, re-faced, and re-embodied. This ‘mutation’ or unintended ‘hacking’ of the practice is a problem for drone pilots and for those who order them to kill, but revealing its moral-epistemic possibilities opens up new avenues for imagining morally better ways of technology-mediated fighting.
The use of robots in therapy for children with autism spectrum disorder raises issues concerning the ethical and social acceptability of this technology and, more generally, about human–robot interaction. However, usually philosophical papers on the ethics of human–robot-interaction do not take into account stakeholders’ views; yet it is important to involve stakeholders in order to render the research responsive to concerns within the autism and autism therapy community. To support responsible research and innovation in this field, this paper identifies a range of ethical, social and therapeutic concerns, and presents and discusses the results of an exploratory survey that investigated these issues and explored stakeholders’ expectations about this kind of therapy. We conclude that although in general stakeholders approve of using robots in therapy for children with ASD, it is wise to avoid replacing therapists by robots and to develop and use robots that have what we call supervised autonomy. This is likely to create more trust among stakeholders and improve the quality of the therapy. Moreover, our research suggests that issues concerning the appearance of the robot need to be adequately dealt with by the researchers and therapists. For instance, our survey suggests that zoomorphic robots may be less problematic than robots that look too much like humans.
Contemporary health care relies on electronic devices. These technologies are not ethically neutral but change the practice of care. In light of Sennett's work and that of other thinkers, one worry is that "e-care"—care by means of new information and communication technologies—does not promote skilful and careful engagement with patients and hence is neither conducive to the quality of care nor to the virtues of the care worker. Attending to the kinds of knowledge involved in care work and their moral significance, this paper explores what "craftsmanship" means in the context of medicine and health care and discusses whether today the care giver's craftsmanship is eroded. It is argued that this is a real danger, especially under modern conditions and in the case of telecare, but that whether it happens, and to what extent it happens, depends on whether in a specific practice and given a specific technology e-carers can develop the know-how and skill to engage more intensely with those under their care and to cooperate with their co-workers.
What does it mean to say that imagination plays a role in moral reasoning, and what are the theoretical and practical implications? Engaging with three traditions in moral theory and confronting them with three contexts of moral practice, this book offers a more comprehensive framework to think about these questions. The author develops an argument about the relation between imagination and principles that moves beyond competition metaphors and center-periphery schemas. He shows that both cooperate and are equally necessary to cope with moral problems, and combines insights of different theories and disciplines to explore how this works in practice.
The discourse concerning computer ethics qualifies as a reference discourse for ethics-related IS research. Theories, topics and approaches of computer ethics are reflected in IS. The paper argues that there is currently a broader development in the area of research governance, which is referred to as 'responsible research and innovation'. RRI applied to information and communication technology addresses some of the limitations of computer ethics and points toward a broader approach to the governance of science, technology and innovation. Taking this development into account will help IS increase its relevance and make optimal use of its established strengths.
Nussbaum’s version of the capability approach is not only a helpful approach to development problems but can also be employed as a general ethical-anthropological framework in ‘advanced’ societies. This paper explores its normative force for evaluating information technologies, with a particular focus on the issue of human enhancement. It suggests that the capability approach can be a useful way to specify a workable and adequate level of analysis in human enhancement discussions, but argues that any interpretation of what these capabilities mean is itself dependent on (interpretations of) the techno-human practices under discussion. This challenges the capability approach’s means-end dualism concerning the relation between on the one hand technology and on the other hand humans and capabilities. It is argued that instead of facing a choice between development and enhancement, we had better reflect on how we want to shape human-technological practices, for instance by using the language of capabilities. For this purpose, we have to engage in a cumbersome hermeneutics that interprets dynamic relations between unstable capabilities, technologies, practices, and values. This requires us to modify the capability approach by highlighting and interpreting its interpretative dimension.
How can we make sense of the idea of ‘personal’ or ‘social’ relations with robots? Starting from a social and phenomenological approach to human–robot relations, this paper explores how we can better understand and evaluate these relations by attending to the ways our conscious experience of the robot and the human–robot relation is mediated by language. It is argued that our talk about and to robots is not a mere representation of an objective robotic or social-interactive reality, but rather interprets and co-shapes our relation to these artificial quasi-others. Our use of language also changes as a result of our experiences and practices. This happens when people start talking to robots. In addition, this paper responds to the ethical objection that talking to and with robots is both unreal and deceptive. It is concluded that in order to give meaning to human–robot relations, to arrive at a more balanced ethical judgment, and to reflect on our current form of life, we should complement existing objective-scientific methodologies of social robotics and interaction studies with interpretations of the words, conversations, and stories in and about human–robot relations.
Contemporary philosophy of technology, in particular mediation theory, has largely neglected language and has paid little attention to the social-linguistic environment in which technologies are used. In order to reintroduce and strengthen these two missing aspects we turn towards Ricoeur’s narrative theory. We argue that technologies have a narrative capacity: not only do humans make sense of technologies by means of narratives but technologies themselves co-constitute narratives and our understanding of these narratives by configuring characters and events in a meaningful temporal whole. We propose a hermeneutic framework that enables us to categorise and interpret technologies according to two hermeneutic distinctions. Firstly, we consider the extent to which technologies close in on the paradigm of the written text by assessing their capacity to actively configure characters and events into a meaningful whole; thereby introducing a linguistic aspect in the theory of technological mediation. Secondly, we consider the extent to which technologies have the capacity to abstract from the public narrative time of actual characters and events by constructing quasi-characters and quasi-events, thereby introducing the social in our conception of technological mediation. This leads us to the outlines of a theory of narrative technologies that revolves around four hermeneutic categories. In order to show the merits of this theory, we discuss the categories by analysing paradigmatic examples of narrative technologies: the bridge, the hydroelectric power plant, video games, and electronic money.
The use of autonomous and intelligent personal social robots raises questions concerning their moral standing. Moving away from the discussion about direct moral standing and exploring the normative implications of a relational approach to moral standing, this paper offers four arguments that justify giving indirect moral standing to robots under specific conditions based on some of the ways humans—as social, feeling, playing, and doubting beings—relate to them. The analogy of “the Kantian dog” is used to assist reasoning about this. The paper also discusses the implications of this approach for thinking about the moral standing of animals and humans, showing why, when, and how an indirect approach can also be helpful in these fields, and using Levinas and Dewey as sources of inspiration to discuss some challenges raised by this approach.
Responding to long-standing warnings that robots and AI will enslave humans, I argue that the main problem we face is not that automation might turn us into slaves but, rather, that we remain masters. First I construct an argument concerning what I call ‘the tragedy of the master’: using the master–slave dialectic, I argue that automation technologies threaten to make us vulnerable, alienated, and automated masters. I elaborate the implications for power, knowledge, and experience. Then I critically discuss and question this argument but also the very thinking in terms of masters and slaves, which fuels both arguments. I question the discourse about slavery and object to the assumptions made about human–technology relations. However, I also show that the discussion about masters and slaves alerts us to issues with human–human relations, in particular to the social consequences of automation such as power issues and the problem of the relation between automation and (un)employment. Finally, I reflect on how we can respond to our predicament, to ‘the tragedy of the master’.
As machines take over more tasks previously done by humans, artistic creation is also considered as a candidate to be automated. But can machines create art? This paper offers a conceptual framework for a philosophical discussion of this question regarding the status of machine art and machine creativity. It breaks the main question down into three sub-questions, and then analyses each question in order to arrive at more precise problems with regard to machine art and machine creativity: What is art creation? What do we mean by “art”? And what do we mean by “machines create art”? This then provides criteria we can use to discuss the main question in relation to particular cases. In the course of the analysis, the paper engages with theory in aesthetics, refers to literature on computational creativity, and contributes to the philosophy of technology and philosophical anthropology by reflecting on the role of technology in art creation. It is shown that the distinctions between process versus outcome criteria and subjective versus objective criteria of creativity are unstable. It is also argued that we should consider non-human forms of creativity, and not only cases where either humans or machines create art but also collaborations between humans and machines, which makes us reflect on human-technology relations. Finally, the paper questions the very approach that seeks criteria and suggests that the artistic status of machines may be shown and revealed in the human/non-human encounter before any theorizing or agreement takes place; an experience which then is presupposed when we theorize. This hints at a more general model of what happens in artistic perception and engagement as a hybrid human-technological and emergent or even poetic process, a model which leaves more room for letting ourselves be surprised by creativity—human and perhaps non-human.
The empirical turn, understood as a turn to the artifact in the work of Ihde, has been a fruitful one, which has rightly abandoned what Serres and Latour call “the empire of signs” of the postmoderns. However, this has unfortunately implied too little attention for language and its relation to technology. The same can be said about the social dimension of technology use, which is largely neglected in postphenomenology. This talk critically responds to Ihde and Stiegler, and sketches a Wittgensteinian inroad to a more holistic and transcendental revision of postphenomenology which does not turn away from the artifact but places it in a wider social context and asks the question regarding the relation between language and technology. Finally, since the earth may be the ultimate condition of possibility, it is asked what this language-sensitive and transcendental approach may imply for rethinking our human position and agency in the Anthropocene. The paper ends by pointing to the role of language as transcendental condition that shapes the very project of thinking the “Anthropocene”.
The standard response to engineering disasters like the Deepwater Horizon case is to ascribe full moral responsibility to individuals and to collectives treated as individuals. However, this approach is inappropriate since concrete action and experience in engineering contexts seldom meet the criteria of our traditional moral theories. Technological action is often distributed rather than individual or collective, we lack full control of the technology and its consequences, and we lack knowledge and are uncertain about these consequences. In this paper, I analyse these problems by employing Kierkegaardian notions of tragedy and moral responsibility in order to account for experiences of the tragic in technological action. I argue that ascription of responsibility in engineering contexts should be sensitive to personal experiences of lack of control, uncertainty, role conflicts, social dependence, and tragic choice. I conclude that this does not justify evading individual and corporate responsibility, but inspires practices of responsibility ascription that are less ‘harsh’ on those directly involved in technological action, that listen to their personal experiences, and that encourage them to gain more knowledge about what they are doing.
The discussion about robots in elderly care is populated by doom scenarios about a totally dehumanized care system in which elderly people are taken care of by machines. Such scenarios are helpful as they alert us to what we think is important with regard to the quality of elderly care. However, this article argues that they are misleading in so far as they (1) assume that deception in care is always morally unacceptable, (2) suggest that robots and other information technologies necessarily deceive elderly people by creating a “virtual” world as opposed to a “real” world, (3) assume that elderly people of the future have similar ICT skills and interests as the elderly people of today, and (4) assume a simplistic view of technologies. The article suggests an approach to evaluating care robots and ICT in health care which acknowledges and addresses a number of ethical problems raised by robotic care—for instance disengagement problems—while taking into account that some elderly people may need care that does not treat them as (empirically) autonomous, that many of the elderly of the future are likely to be digital natives, and that the task of evaluating technologies in care is more complicated than the discussion of deception suggests.
A prima facie analysis suggests that there are essentially two, mutually exclusive, ways in which risk arising from engineering design can be managed: by imposing external constraints on engineers or by engendering their feelings of responsibility and respecting their autonomy. The author discusses the advantages and disadvantages of both approaches. However, he then shows that this opposition is a false one and that there is no simple relation between regulation and autonomy. Furthermore, the author argues that the most pressing need is not more or less regulation but the further development of moral imagination. The enhancement of moral imagination can help engineers to discern the moral relevance of design problems, to create new design options, and to envisage the possible outcomes of their designs. The author suggests a dual program of developing regulatory frameworks that support engineers’ autonomy and responsibility simultaneously with efforts to promote their moral imagination. He describes how some existing institutional changes have started off in this direction and proposes empirical research to take this further.
Many philosophical and public discussions of the ethical aspects of violent computer games typically centre on the relation between playing violent videogames and its supposed direct consequences on violent behaviour. But such an approach rests on a controversial empirical claim, is often one-sided in the range of moral theories used, and remains on a general level with its focus on content alone. In response to these problems, I pick up Matt McCormick’s thesis that potential harm from playing computer games is best construed as harm to one’s character, and propose to redirect our attention to the question of how violent computer games influence the moral character of players. Inspired by the work of Martha Nussbaum, I sketch a positive account of how computer games can stimulate an empathetic and cosmopolitan moral development. Moreover, rather than making a general argument applicable to a wide spectrum of media, my concern is with specific features of violent computer games that make them especially morally problematic in terms of empathy and cosmopolitanism, features that have to do with the connections between content and medium, and between virtuality and reality. I also discuss some remaining problems. In this way I hope to contribute to a less polarised discussion about computer games that does justice to the complexity of their moral dimension, and to offer an account that is helpful to designers, parents, and other stakeholders.
Purpose This paper aims to fill a gap in the literature by providing a conceptual framework for discussing “technologies of the self and other,” by showing that, in most cases, self-tracking also involves other-tracking. Design/methodology/approach In so doing, we draw upon Foucault’s “technologies of the self” and present-day literature on self-tracking technologies. We elaborate on two cases and practical domains to illustrate and discuss this mutual process: first, the quantified workplace; and second, quantification by wearables in a non-clinical and self-initiated context. Findings The main conclusion is that these shapings are never neutral and have ethical implications, such as regarding “quantified otherness,” a notion we propose to point at the risk that the other could become an object of examination and competition. Originality/value Although there is ample literature on the quantified self, considerably less attention is given to how the relation with the other is being shaped by self-tracking technologies that allow data sharing.
Dreyfus’s work is widely known for its critique of artificial intelligence and still stands as an example of how to do excellent philosophical work that is at the same time relevant to contemporary technological and scientific developments. But for philosophers of technology, especially for those sympathetic to using Heidegger, Merleau-Ponty, and Wittgenstein as sources of inspiration, it has much more to offer. This paper outlines Dreyfus’s account of skillful coping and critically evaluates its potential for thinking about technology. First, it is argued that his account of skillful coping can be developed into a general view about handling technology which gives due attention to know-how/implicit knowledge and embodiment. Then a number of outstanding challenges are identified that are difficult to cope with if one remains entirely within the world of Dreyfus’s writings. They concern questions regarding other conceptualizations of technology and human–technology relations, issues concerning how to conceptualize the social and the relation between skill, meaning, and practices, and the question about the ethical and political implications of his view, including how virtue and skill are related. Acknowledging some known discussions about Dreyfus’s work, but also drawing on other material and on the author’s previous writings, the paper suggests that to address these challenges and develop the account of skillful coping into a wider-scoped, Dreyfus-inspired philosophy of technology, it could take more distance from Heidegger’s conceptions of technology and benefit from engagement with work in postphenomenology, pragmatism, the later Wittgenstein, and virtue ethics.
In this brief article we reply to Michal Piekarski’s response to our article ‘Facing Animals’ published previously in this journal. In our article we criticized the properties approach to defining the moral standing of animals, and in its place proposed a relational and other-oriented concept that is based on a transcendental and phenomenological perspective, mainly inspired by Heidegger, Levinas, and Derrida. In this reply we question and problematize Piekarski’s interpretation of our essay and critically evaluate “the ethics of commitment” that he offers as an alternative.
This essay shows that a sharp distinction between ethics and aesthetics is unfruitful for thinking about how to live well with technologies, and in particular for understanding and evaluating how we cope with human existential vulnerability, which is crucially mediated by the development and use of technologies such as electronic ICTs. It is argued that vulnerability coping is a matter of ethics and art: it requires developing a kind of art and techne in the sense that it always involves technologies and specific styles of experiencing and dealing with vulnerability, which depend on social and cultural context. It is suggested that we try to find better, perhaps less modern styles of coping with our vulnerability, recognize limits to our attempts to “design” our new forms, and explore what kinds of technologies we need for this project.
Can we conceive of a philosophy of technology that is not technophobic, yet takes seriously the problem of alienation and human meaning-giving? This paper retrieves the concern with alienation, but brings it into dialogue with more recent philosophy of technology. It defines and responds to the problem of alienation in a way that avoids both old-style human-centered approaches and contemporary thing-centered or hybridity approaches. In contrast to the latter, it proposes to reconcile subject and object not at the ontic level but at the ontological, transcendental level and at the level of skilled activity. Taking inspiration from Dreyfus’s reading of Heidegger and engaging critically with the work of Borgmann and Arendt, it explores a phenomenology and ethics of skill. It is concluded that new and emerging technologies must be evaluated not only as artifacts and their consequences, but also in terms of the skills and activities they involve and require. Do they promote engagement with the world and with others, thus making us into better persons?
Paul Ricœur has been one of the most influential and intellectually challenging philosophers of the last century, and his work has contributed to a vast array of fields: studies of language, of history, of ethics and politics. However, he has up until recently only had a minor impact on the philosophy of technology. Interpreting Technology aims to put Ricœur’s work at the centre of contemporary philosophical thinking concerning technology. It investigates his project of critical hermeneutics for rethinking established theories of technology, the growing ethical and political impacts of technologies on the modern lifeworld, and ways of analysing global sociotechnical systems such as the Internet. Ricœur’s philosophy allows us to approach questions such as: how could narrative theory enhance our understanding of technological mediation? How can our technical practices be informed by the ethical aim of living the good life, with and for others, in just institutions? And how does the emerging global media landscape shape our sense of self, and our understanding of history? These questions are more timely than ever, considering the enormous impact technologies have on daily life in the 21st century: on how we shape ourselves with health apps, how we engage with one another through social media, and how we act politically through digital platforms.
Purpose This paper aims to show how the production of meaning is a matter of people interacting with technologies, throughout their appropriation and in co-performances. The researchers rely on the case of household-based voice assistants that endorse speaking as a primary mode of interaction with technologies. By analyzing the ethical significance of voice assistants as co-producers of moral meaning intervening in the material and socio-cultural space of the home, the paper invites their informed and critical use as a form of empowerment while acknowledging their productive role in shaping human values. Design/methodology/approach This paper presents an empirically informed philosophical analysis. Using the conceptual frameworks of technological appropriation and human–technological performances, while drawing on interviews with voice assistants’ users and literature studies, this paper unravels the meaning-making processes in relation to these technologies in household use. It additionally draws on a Wittgensteinian perspective to attend to the productive role of language and link to wider cultural meanings. Findings By combining two approaches, appropriation and technoperformances, and analyzing the themes of privacy, power and knowledge, the paper shows how voice assistants help to shape a specific moral subject: embodied in space and made as it performatively responds to the device and makes sense of it together with others. Originality/value The researchers show how through making sense of technologies in appropriation and performatively responding to them, people can change and intervene in the power structures that technologies suggest.
Ethical reflections on military robotics can be enriched by a better understanding of the nature and role of these technologies and by putting robotics into context in various ways. Discussing a range of ethical questions, this paper challenges the prevalent assumptions that military robotics is about military technology as a mere means to an end, about single killer machines, and about “military” developments. It recommends that ethics of robotics attend to how military technology changes our aims, concern itself not only with individual robots but also and especially with networks and swarms, and adapt its conceptions of responsibility to the rise of such cloudy and unpredictable systems, which rely on decentralized control and buzz across many spheres of human activity.
Various arguments have been provided for drawing non-humans such as animals and artificial agents into the sphere of moral consideration. In this paper, I argue for a shift from an ontological to a social-philosophical approach: instead of asking what an entity is, we should try to conceptually grasp the quasi-social dimension of relations between non-humans and humans. This allows me to reconsider the problem of justice, in particular distributive justice. Engaging with the work of Rawls, I show that an expansion of the contractarian framework to non-humans causes an important problem for liberalism, but can be justified by a contractarian argument. Responding to Bell’s and Nussbaum’s comments on Rawls, I argue that we can justify drawing non-humans into the sphere of distributive justice by relying on the notion of a co-operative scheme. I discuss what co-operation between humans and non-humans can mean and the extent to which it depends on properties. I conclude that we need to imagine principles of ecological and technological distributive justice.
Although the role of imagination in moral reasoning is often neglected, recent literature, mostly of pragmatist signature, points to imagination as one of its central elements. In this article we develop some of their arguments by looking at the moral role of imagination in practice, in particular the practice of neonatal intensive care. Drawing on empirical research, we analyze a decision-making process in various stages: delivery, staff meeting, and reflection afterwards. We show how imagination aids medical practitioners in demarcating moral categories, tuning their actions, and exploring long-range consequences of decisions. We argue that imagination helps to bring about at least four kinds of integration in the moral decision-making process: personal integration by creating a moral self-image in moments of reflection; social integration by aiding the conciliation of the diverging perspectives of the people involved; temporal integration by facilitating the parties to transcend the present moment and connect past, present, and future; and epistemological integration by helping to combine the various forms of knowledge and experience needed to make moral decisions. Furthermore, we argue that the role of imagination in these moral decision-making processes is limited in several significant ways. Rather than being a solution itself, it is merely an aid and cannot replace the decision itself. Finally, there are also limits to the practical relevance of this theoretical reflection. In the end, it is up to care professionals as reflective practitioners to re-imagine the practice of intensive care and make the right decisions with hope and imagination.
There is a gap between, on the one hand, the tragic character of human action and, on the other hand, our moral and legal conceptions of responsibility that focus on individual agency and absolute guilt. Drawing on Kierkegaard’s understanding of tragic action and engaging with contemporary discourse on moral luck, poetic justice, and relational responsibility, this paper argues for a reform of our legal practices based on a less ‘harsh’ (Kierkegaard) conception of moral and legal responsibility and directed more at empathic understanding based on the emotional and imaginative appreciation of personal narratives. This may help our societies and communities to better cope with unacceptable deeds by individuals who are neither criminals nor patients, to make room for praise as well as blame and punishment, and to set up practices and institutions that do not rely on a conception of responsibility that is hard to bear for all of us.
Transitions in monetary technologies raise novel ethical and philosophical questions. One prominent transition concerns the introduction of cryptocurrencies, which are digital currencies based on blockchain technology. Bitcoin is an example of a cryptocurrency. In this paper we discuss ethical issues raised by cryptocurrencies by conceptualising them as what we call “narrative technologies”. Drawing on the work of Ricoeur and responding to the work of Searle, we elaborate on the social and linguistic dimension of money and cryptocurrencies, and explore the implications of our proposed theoretical framework for the ethics of cryptocurrencies. In particular, taking a social-narrative turn, we argue that technologies have a temporal and narrative character: that they are made sense of by means of individual and collective narratives but also themselves co-constitute those narratives and inter-human and social relations, configuring events into a meaningful temporal whole. We show how cryptocurrencies such as Bitcoin dynamically re-configure social relations and explore the consequent ethical implications.
An influential approach to engineering ethics is based on codes of ethics and the application of moral principles by individual practitioners. However, to better understand the ethical problems of complex technological systems and the moral reasoning involved in such contexts, we need other tools as well. In this article, we consider the role of imagination and develop a concept of distributed responsibility in order to capture a broader range of human abilities and dimensions of moral responsibility. We show that in the case of Snorre A, a near-disaster with an oil and gas production installation, imagination played a crucial and morally relevant role in how the crew coped with the crisis. For example, we discuss the role of scenarios and images in the moral reasoning and discussion of the platform crew in coping with the crisis. Moreover, we argue that responsibility for increased system vulnerability, turning an undesired event into a near-disaster, should not be ascribed exclusively, for example to individual engineers alone, but should be understood as distributed between various actors, levels and times. We conclude that both managers and engineers need imagination to transcend their disciplinary perspectives in order to improve the robustness of their organisations and to be better prepared for crisis situations. We recommend that education and training programmes should be transformed accordingly.
If we want to be autonomous, what do we want? The author shows that contemporary value-neutral and metaphysically economical conceptions of autonomy, such as that of Harry Frankfurt, face a serious problem. Drawing on Plato, Augustine, and Kant, this book provides a sketch of how "ancient" and "modern" can be reconciled to solve it. But at what expense? It turns out that the dominant modern ideal of autonomy cannot do without a costly metaphysics if it is to be coherent.