Citations of:
Technologies that deploy data science methods are liable to result in epistemic harms involving the diminution of individuals with respect to their standing as knowers or their credibility as sources of testimony. Not all harms of this kind are unjust, but when they are, we ought to try to prevent or correct them. Epistemically unjust harms will typically intersect with other more familiar and well-studied kinds of harm that result from the design, development, and use of data science technologies. However, (...) |
|
Artificial intelligence-based (AI) technologies such as machine learning (ML) systems are playing an increasingly relevant role in medicine and healthcare, bringing about novel ethical and epistemological issues that need to be addressed in a timely manner. Even though ethical questions connected to epistemic concerns have been at the center of the debate, it has gone largely unnoticed how epistemic forms of injustice can be ML-induced, specifically in healthcare. I analyze the shortcomings of an ML system currently deployed in the USA to predict patients’ likelihood (...) |
|
Fairness is one of the most prominent values in the Ethics and Artificial Intelligence debate and, specifically, in the discussion on algorithmic decision-making (ADM). However, while the need for fairness in ADM is widely acknowledged, the very concept of fairness has not been sufficiently explored so far. Our paper aims to fill this gap and claims that an ethically informed re-definition of fairness is needed to adequately investigate fairness in ADM. To achieve our goal, after an introductory section aimed at clarifying (...) |
|
Machine learning (ML) systems play an increasingly relevant role in medicine and healthcare. As their applications move ever closer to patient care and cure in clinical settings, ethical concerns about the responsibility of their use come to the fore. I analyse an aspect of responsible ML use that bears not only an ethical but also a significant epistemic dimension. I focus on ML systems’ role in mediating patient–physician relations. I thereby consider how ML systems may silence patients’ voices and relativise (...) |
|
Despite the rapid adoption of technology in human resource departments, there is little empirical work that examines the potential challenges of algorithmic decision-making in the recruitment process. In this paper, we take the perspective of job applicants and examine how they perceive the use of algorithms in selection and recruitment. Across four studies on Amazon Mechanical Turk, we show that people in the role of a job applicant perceive algorithm-driven recruitment processes as less fair compared to human-only or algorithm-assisted (...) |
|
mHealth technology is mushrooming worldwide and, in a variety of forms, reaches increasing numbers of users in ever-widening contexts, virtually independently of standard medical evidence assessment. Yet debate on its broader societal impact, including in particular the mapping and classification of the ethical issues raised, has been limited. This article, as part of an ongoing empirically informed ethical research project, provides an overview of ethical issues of mHealth applications with a specific focus on implications for autonomy as a key notion in (...) |
|
Data analytics and data-driven approaches in Machine Learning are now among the most hailed computing technologies in many industrial domains. One major application is predictive analytics, which is used to predict sensitive attributes, future behavior, or cost, risk and utility functions associated with target groups or individuals based on large sets of behavioral and usage data. This paper stresses the severe ethical and data protection implications of predictive analytics if it is used to predict sensitive information about single individuals or (...) |
|
As a relational concept, responsible innovation can be made more tangible by asking: innovation of what, and responsibility of whom for what? Arranging the scattered field of responsible innovation comprehensively, starting from an anthropological point of view, into five fields of tension and five categories of spearheads may be theoretically and practically helpful while offering suggestions for both research and management. |
|
Artificial intelligence has historically been conceptualized in anthropomorphic terms. Some algorithms deploy biomimetic designs in a deliberate attempt to effect a sort of digital isomorphism of the human brain. Others leverage more general learning strategies that happen to coincide with popular theories of cognitive science and social epistemology. In this paper, I challenge the anthropomorphic credentials of the neural network algorithm, whose similarities to human cognition I argue are vastly overstated and narrowly construed. I submit that three alternative supervised learning (...) |
|
This study investigates the ethical use of Big Data and Artificial Intelligence technologies, using an empirical approach. The paper categorises the current literature and presents a multi-case study of 'on-the-ground' ethical issues that uses qualitative tools to analyse findings from ten targeted case studies from a range of domains. The analysis coalesces identified singular ethical issues into clusters to offer a comparison with the proposed classification in the literature. The results show that despite the variety of different social domains, fields, and (...) |
|
This paper discusses the problem of responsibility attribution raised by the use of artificial intelligence technologies. It is assumed that only humans can be responsible agents; yet this alone already raises many issues, which are discussed starting from two Aristotelian conditions for responsibility. Next to the well-known problem of many hands, the issue of “many things” is identified and the temporal dimension is emphasized when it comes to the control condition. Special attention is given to the epistemic condition, which draws (...) |
|
We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation(s) for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players to explore overlapping causal patterns of (...) |
|
This paper sets out the notion of a current “biopolitical turn of digital capitalism” resulting from the increasing deployment of AI and data analytics technologies in the public sector. With applications of AI-based automated decisions currently shifting from the domain of business-to-customer relations to government-to-citizen relations, a new form of governance arises that operates through “algorithmic social selection”. Moreover, the paper describes how the ethics of AI is at an impasse concerning these larger societal and socioeconomic (...) |
|
The paper deals with the difference between who and what we are in order to take an ethical perspective on algorithms and their regulation. The present casting of ourselves as homo digitalis implies the possibility of projecting who we are as social beings sharing a world into the digital medium, thereby engendering what can be called digital whoness, or a digital reification of ourselves. A main ethical challenge for the evolving digital age consists in unveiling this ethical difference, particularly when (...) |
|
Digital hyperconnectivity is a defining fact of our time. In addition to recasting social interaction, culture, economics, and politics, it has profoundly transformed the self. It has created new ways of being and constructing a self, but also new ways of being constructed as a self from the outside, new ways of being configured, represented, and governed as a self by sociotechnical systems. Rather than analyze theories of the self, I focus on practices of the self, using this expression in (...) |
|
Fairness of Artificial Intelligence decisions has become a big challenge for governments, companies, and societies. We offer a theoretical contribution to consider AI ethics outside of high-level and top-down approaches, based on the distinction between “reality” and “world” from Luc Boltanski. To do so, we first provide a new perspective on the debate on AI fairness and show that criticism of ML unfairness is “realist”, in other words, grounded in an already instituted reality based on demographic categories produced by institutions. Second, (...) |
|
Organizations increasingly rely on algorithm-based HR decision-making to monitor their employees. This trend is reinforced by the technology industry claiming that its decision-making tools are efficient and objective, downplaying their potential biases. In our manuscript, we identify an important challenge arising from the efficiency-driven logic of algorithm-based HR decision-making, namely that it may shift the delicate balance between employees’ personal integrity and compliance more in the direction of compliance. We suggest that critical data literacy, ethical awareness, the use of participatory (...) |
|
Biases in cognition are ubiquitous. Social psychologists have suggested that biases and stereotypes serve a multifarious set of cognitive goals, while at the same time stressing their potential harmfulness. Recently, biases and stereotypes have become the purview of heated debates in the machine learning community too. Researchers and developers are becoming increasingly aware of the fact that some biases, like gender and race biases, are entrenched in the algorithms some AI applications rely upon. Here, taking into account several existing approaches that address the (...) |
|
There is a long history of the science of intelligent machines, and its potential to provide scientific insights has been debated since the dawn of AI. In particular, there is renewed interest in the role of AI in research and research policy as an enabler of new methods, processes, management and evaluation, a role which is still relatively under-explored. This empirical paper explores interviews with leading scholars on the potential impact of AI on research practice and culture through deductive, thematic analysis to (...) |
|
Enacting an AI system typically requires three iterative phases where AI engineers are in command: selection and preparation of the data, selection and configuration of algorithmic tools, and fine-tuning of the different parameters on the basis of intermediate results. Our main hypothesis is that these phases involve practices that raise ethical questions. This paper maps these ethical questions and proposes a way to address them in light of a neo-republican understanding of freedom, defined as absence of domination. We thereby identify different (...) |
|
This article presents a conceptual investigation into the value impacts and relations of algorithms in the domain of justice and security. As a conceptual investigation, it represents one step in a value sensitive design based methodology. Here, we explicate and analyse the expression of values of accuracy, privacy, fairness and equality, property and ownership, and accountability and transparency in this context. We find that values are sensitive to disvalue if algorithms are designed, implemented or deployed inappropriately or without sufficient consideration (...) |
|
This short commentary on Peters identifies the entrenchment of political positions as one additional concern related to algorithmic political bias, beyond those identified by Peters. First, it is observed that the political positions detected and predicted by algorithms are typically contingent and largely explained by “political tribalism”, as argued by Brennan. Second, following Hacking, the social construction of political identities is analyzed and it is concluded that algorithmic political bias can contribute to such identities. Third, following Nozick, it is argued (...) |
|
As information technologies have become synonymous with progress in modern society, several ethical concerns have surfaced about their societal implications. In the past few decades, information technologies have had a value-laden impact on social evolution. However, there is limited agreement on the responsibility of businesses and innovators concerning the ethical aspects of information technologies. There is a need to understand the role of business incentives and attitudes in driving technological progress and to understand how they steer the ethics discourse on (...) |
|
The concept of meaningful work has recently received increased attention in philosophy and other disciplines. However, the impact of the increasing robotization of the workplace on meaningful work has received very little attention so far. Doing work that is meaningful leads to higher job satisfaction and increased worker well-being, and some argue for a right to access to meaningful work. In this paper, we therefore address the impact of robotization on meaningful work. We do so by identifying five key aspects (...) |
|
Algorithms can now identify patterns and correlations in (big) datasets and predict outcomes based on those identified patterns and correlations. With the use of machine learning techniques and big data, decisions can then be made by algorithms themselves in accordance with the predicted outcomes. Yet algorithms can inherit questionable values from the datasets and acquire biases in the course of (machine) learning, and automated algorithmic decision-making makes it more difficult for people to see algorithms as biased. While researchers have (...) |
|
In this paper I ask whether mathematicians should swear an oath similar to the Hippocratic oath sworn by some medical professionals as a means to foster morally praiseworthy engagement with the ethical dimensions of mathematics. I individuate four dimensions in which mathematics is ethically charged: (1) applying mathematical knowledge to the world can cause harm, (2) participation of mathematicians in morally contentious practices is an ethical issue, (3) mathematics as a social activity faces relevant ethical concerns, (4) mathematical knowledge itself (...) |
|
Today humanity is in the midst of the massive expansion of new and fundamental technology, represented by advanced artificial intelligence (AI) systems. The ongoing revolution of these technologies and their profound impact across various sectors have triggered discussions about the characteristics and values that should guide their use and development in a responsible manner. In this paper, we conduct a systematic literature review with the aim of pointing out existing challenges and required principles in AI-based systems in different industries. We (...) |
|
In this paper, I examine whether the use of artificial intelligence (AI) and automated decision-making (ADM) aggravates issues of discrimination as has been argued by several authors. For this purpose, I first take up the lively philosophical debate on discrimination and present my own definition of the concept. Equipped with this account, I subsequently review some of the recent literature on the use of AI/ADM and discrimination. I explain how my account of discrimination helps to understand that the general claim in (...) |
|
The potential use, relevance, and application of AI and other technologies in the democratic process may be obvious to some. However, technological innovation and, even, its consideration may face an intuitive push-back in the form of algorithm aversion (Dietvorst et al. J Exp Psychol 144(1):114–126, 2015). In this paper, I confront this intuition and suggest that a more ‘extreme’ form of technological change in the democratic process does not necessarily result in a worse outcome in terms of the fundamental concepts (...) |
|
In this article, we explore how digital marketers think about marketing in the age of Big Data surveillance, automatic computational analyses, and algorithmic shaping of choice contexts. Our starting point is a contradiction at the heart of digital marketing, namely that digital marketing brings about both unprecedented levels of consumer empowerment and autonomy and total control over and manipulation of consumer decision-making. We argue that this contradiction of digital marketing is resolved via the notion of relevance, which represents what Fredric Jameson (...) |
|
We are sceptical of concerns over the opacity of algorithmic decision tools. While transparency and explainability are certainly important desiderata in algorithmic governance, we worry that automated decision-making is being held to an unrealistically high standard, possibly owing to an unrealistically high estimate of the degree of transparency attainable from human decision-makers. In this paper, we review evidence demonstrating that much human decision-making is fraught with transparency problems, show in what respects AI fares little worse or better and argue that (...) |
|
Although existing work draws attention to a range of obstacles in realizing fair AI, the field lacks an account that emphasizes how these worries hang together in a systematic way. Furthermore, a review of the fair AI and philosophical literature demonstrates the unsuitability of ‘treat like cases alike’ and other intuitive notions as conceptions of fairness. That review then generates three desiderata for a replacement conception of fairness valuable to AI research: It must provide a meta-theory for understanding tradeoffs, entailing (...) |
|
Self-driving cars currently face a lot of technological problems that need to be solved before the cars can be widely used. However, they also face ethical problems, among which the question of crash-optimization algorithms is most prominently discussed. Reviewing current debates about whether we should use the ethics of the Trolley Dilemma as a guide towards designing self-driving cars will provide us with insights about what exactly ethical research does. It will result in the view that although we need the (...) |
|
AI systems have often been found to contain gender biases. As a result of these gender biases, AI routinely fails to adequately recognize the needs, rights, and accomplishments of women. In this article, we use Axel Honneth’s theory of recognition to argue that AI’s gender biases are not only an ethical problem because they can lead to discrimination, but also because they resemble forms of misrecognition that can hurt women’s self-development and self-worth. Furthermore, we argue that Honneth’s theory of recognition (...) |
|
Though rapid collection of big data is ubiquitous across domains, from industry settings to academic contexts, the ethics of big data collection and research are contested. A nexus of data ethics issues is the concept of creep, or repurposing of data for other applications or research beyond the conditions of original collection. Data creep has proven controversial and has prompted concerns about the scope of ethical oversight. Institutional review boards offer little guidance regarding big data, and problematic research can still (...) |
|
We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players to explore overlapping causal (...) |
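To make the trade-off structure described in this abstract concrete, here is a minimal sketch, not the authors' formalism, of how a three-dimensional Pareto frontier over candidate explanations could be computed; the candidate names and the accuracy, simplicity, and relevance scores are invented for illustration only.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Explanation:
    """A candidate explanation with hypothetical scores (higher is better)."""
    name: str
    accuracy: float    # fidelity to the underlying algorithmic prediction
    simplicity: float  # e.g., inverse of description length
    relevance: float   # usefulness to the inquiring player

def dominates(a: Explanation, b: Explanation) -> bool:
    """True if a is at least as good as b on every criterion and strictly better on at least one."""
    at_least_as_good = (a.accuracy >= b.accuracy
                        and a.simplicity >= b.simplicity
                        and a.relevance >= b.relevance)
    strictly_better = (a.accuracy > b.accuracy
                       or a.simplicity > b.simplicity
                       or a.relevance > b.relevance)
    return at_least_as_good and strictly_better

def pareto_frontier(candidates: List[Explanation]) -> List[Explanation]:
    """Keep only the candidates that no other candidate dominates."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# Invented candidates and scores, purely for illustration.
candidates = [
    Explanation("full decision path", accuracy=0.95, simplicity=0.20, relevance=0.60),
    Explanation("top-3 feature attribution", accuracy=0.80, simplicity=0.70, relevance=0.75),
    Explanation("one-sentence rule of thumb", accuracy=0.55, simplicity=0.95, relevance=0.50),
    Explanation("generic boilerplate", accuracy=0.40, simplicity=0.60, relevance=0.10),
]
for e in pareto_frontier(candidates):
    print(e.name)
```

On these made-up numbers the last candidate is dominated and drops out, leaving the other three as the frontier of optimal trade-offs that the players would explore. |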
|
There is increasing criticism of the use of big data and algorithms in public governance. Studies have revealed that algorithms may reinforce existing biases and defy scrutiny by the public officials using them and the citizens subject to algorithmic decisions and services. In response, scholars have called for more algorithmic transparency and regulation. These are useful, but ex post solutions in which the development of algorithms remains a rather autonomous process. This paper argues that co-design of algorithms with relevant stakeholders from government and (...) |
|
As the capabilities of artificial intelligence systems improve, it becomes important to constrain their actions to ensure their behaviour remains beneficial to humanity. A variety of ethical, legal and safety-based frameworks have been proposed as a basis for designing these constraints. Despite their variations, these frameworks share the common characteristic that decision-making must consider multiple potentially conflicting factors. We demonstrate that these alignment frameworks can be represented as utility functions, but that the widely used Maximum Expected Utility paradigm provides insufficient (...) |
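As a rough illustration of the claim above, and a sketch under invented assumptions rather than the paper's own framework, the snippet below encodes several potentially conflicting alignment factors as a weighted utility function and applies the standard Maximum Expected Utility rule; the factor names, weights, and probabilities are hypothetical.

```python
from typing import Dict, List, Tuple

# An outcome assigns each (hypothetical) alignment factor a score in [0, 1].
Outcome = Dict[str, float]
# An action is modelled as a lottery: a list of (probability, outcome) pairs.
Lottery = List[Tuple[float, Outcome]]

def utility(outcome: Outcome, weights: Dict[str, float]) -> float:
    """Collapse the conflicting factors into a single scalar, as a utility-function view requires."""
    return sum(weights[f] * outcome[f] for f in weights)

def expected_utility(lottery: Lottery, weights: Dict[str, float]) -> float:
    """Probability-weighted utility over the action's possible outcomes."""
    return sum(p * utility(o, weights) for p, o in lottery)

def meu_choice(actions: Dict[str, Lottery], weights: Dict[str, float]) -> str:
    """Maximum Expected Utility: pick the action whose expected utility is highest."""
    return max(actions, key=lambda a: expected_utility(actions[a], weights))

# Invented trade-off weights and outcome probabilities.
weights = {"task_benefit": 0.5, "safety": 0.3, "legality": 0.2}
actions = {
    "act_fast": [
        (0.7, {"task_benefit": 0.9, "safety": 0.4, "legality": 0.8}),
        (0.3, {"task_benefit": 0.9, "safety": 0.1, "legality": 0.8}),
    ],
    "act_cautiously": [
        (1.0, {"task_benefit": 0.6, "safety": 0.9, "legality": 1.0}),
    ],
}
print(meu_choice(actions, weights))  # prints "act_cautiously" for these numbers
```

The sketch only makes visible that MEU forces all the potentially conflicting factors through a single scalar aggregation step; the abstract's argument concerns why that can be insufficient, which the sketch does not attempt to reproduce. |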
|
Research on the ethics of algorithms has grown substantially over the past decade. Alongside the exponential development and application of machine learning algorithms, new ethical problems and solutions relating to their ubiquitous use in society have been proposed. This article builds on a review of the ethics of algorithms published in 2016. The goals are to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis of epistemic and normative (...) |
|
Transparency is now a fundamental principle for data processing under the General Data Protection Regulation. We explore what this requirement entails for artificial intelligence and automated decision-making systems. We address the topic of transparency in artificial intelligence by integrating legal, social, and ethical aspects. We first investigate the ratio legis of the transparency requirement in the General Data Protection Regulation and its ethical underpinnings, showing its focus on the provision of information and explanation. We then discuss the pitfalls with respect (...) |
|
Due to the ongoing advancements in technology, socio-technical collaboration has become increasingly prevalent. This poses challenges in terms of governance and accountability, as well as issues in various other fields. Therefore, it is crucial to familiarize decision-makers and researchers with the core of human–machine collaboration. This study introduces a taxonomy that enables identification of the very nature of human–machine interaction. A literature review has revealed that automation and technical autonomy are main parameters for describing and understanding such interaction. Both aspects (...) |
|
A major challenge with the increasing use of Artificial Intelligence applications is to manage the long-term societal impacts of this technology. Two central concerns that have emerged in this respect are that the optimized goals behind the data processing of AI applications usually remain opaque and the energy footprint of their data processing is growing quickly. This study thus explores how much people value the transparency and environmental sustainability of AI using the example of personal AI assistants. The results from (...) |
|
In light of the recent emergence of predictive techniques in law enforcement to forecast crimes before they occur, this paper examines the temporal operation of power exercised by predictive policing algorithms. I argue that predictive policing exercises power through a paranoid style that constitutes a form of temporal governmentality. Temporality is especially pertinent to understanding what is ethically at stake in predictive policing as it is continuous with a historical racialized practice of organizing, managing, controlling, and stealing time. After first (...) |
|
Firms increasingly deploy algorithmic pricing approaches to determine what to charge for their goods and services. Algorithmic pricing can discriminate prices both dynamically over time and personally depending on individual consumer information. Although legal, the ethicality of such approaches needs to be examined as often they trigger moral concerns and sometimes outrage. In this research paper, we provide an overview and discussion of the ethical challenges germane to algorithmic pricing. As a basis for our discussion, we perform a systematic interpretative (...) |
|
Our contemporary condition is deeply infused with scientific-technological rationales. These influence and shape our ethical reasoning on war, including the moral status of civilians and the moral choices available to us. In this article, I discuss how technology shapes and directs the moral choices available to us by setting parameters for moral deliberation. I argue that technology has moral significance for just war thinking, yet this is often overlooked in attempts to assess who is liable to harm in war and (...) |
|
Healthcare provision, like many other sectors of society, is undergoing major changes due to the increased use of data-driven methods and technologies. This increased reliance on big data in medicine can lead to shifts in the norms that guide healthcare providers and patients. Continuous critical normative reflection is called for to track such potential changes. This article presents the results of an interview-based study with 20 German and Swiss experts from the fields of medicine, life science research, informatics and humanities (...) |
|
The notion of “responsibility gap” with artificial intelligence was originally introduced in the philosophical debate to indicate the concern that “learning automata” may make it more difficult or impossible to attribute moral culpability to persons for untoward events. Building on literature in moral and legal philosophy, and ethics of technology, the paper proposes a broader and more comprehensive analysis of the responsibility gap. The responsibility gap, it is argued, is not one problem but a set of at least four interconnected problems (...) |