In two experiments (total N=693) we explored whether people are willing to consider paintings made by AI-driven robots as art, and robots as artists. Across the two experiments, we manipulated three factors: (i) agent type (AI-driven robot vs. human agent), (ii) behavior type (intentional creation of a painting vs. accidental creation), and (iii) object type (abstract vs. representational painting). We found that people judge robot paintings and human paintings as art to roughly the same extent. However, people are much less willing to consider robots as artists than humans, which is partially explained by the fact that they are less disposed to attribute artistic intentions to robots.
Since 2014, the volumes of the renowned Wiener Reihe have been published by De Gruyter. While the outward layout of the volumes has been modernized, in content and contributors the profile of the series, which has been appearing for more than two decades, is marked by continuity. Each volume is devoted to a current philosophical question. An international authorship and the publication of contributions in foreign languages are elements of the program. The series aims to help break down dogmatic boundaries between philosophical schools and traditions.
The highly sophisticated capabilities of artificial intelligence (AI) have caused its popularity to skyrocket across many industry sectors globally, and the public sector is one of these. Many cities around the world are trying to position themselves as leaders of urban innovation through the development and deployment of AI systems. Likewise, increasing numbers of local government agencies are attempting to utilise AI technologies in their operations to deliver policy and generate efficiencies in highly uncertain and complex urban environments. While the popularity of AI is on the rise in urban policy circles, there is limited understanding of, and a lack of empirical studies on, city managers' perceptions concerning urban AI systems. Bridging this gap is the rationale of this study. The methodological approach adopted in this study is twofold. First, the study collects data through semi-structured interviews with city managers from Australia and the US. Then, the study analyses the data using the summative content analysis technique with two data analysis software packages. The analysis identifies the following themes and generates insights into local government services: AI adoption areas, cautionary areas, challenges, effects, impacts, knowledge basis, plans, preparedness, roadblocks, technologies, deployment timeframes, and usefulness. The study findings inform city managers in their efforts to deploy AI in their local government operations, and offer directions for prospective research.
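As an illustration of the summative content analysis technique the study describes, the minimal Python sketch below counts theme-keyword occurrences across interview transcripts, which is the quantitative first step of that method; the theme keywords, transcripts, and function are invented for illustration and are not the study's actual coding scheme or data.

```python
from collections import Counter

# Hypothetical theme keywords; the study's actual coding scheme is not
# given in the abstract, so these are illustrative assumptions only.
THEME_KEYWORDS = {
    "adoption areas": ["adopt", "chatbot", "service delivery"],
    "challenges": ["challenge", "barrier", "skills"],
    "preparedness": ["prepared", "readiness", "capability"],
}

def summative_content_analysis(transcripts):
    """Count keyword occurrences per theme across transcripts.

    Summative content analysis begins with manifest keyword counts and
    then moves on to interpreting them in context; only the counting
    step is sketched here.
    """
    counts = Counter()
    for text in transcripts:
        lowered = text.lower()
        for theme, keywords in THEME_KEYWORDS.items():
            counts[theme] += sum(lowered.count(kw) for kw in keywords)
    return counts

# Toy usage with invented interview snippets:
transcripts = [
    "We adopted a chatbot for service delivery, but skills remain a barrier.",
    "Our readiness is low; the main challenge is staff capability.",
]
print(summative_content_analysis(transcripts))
```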
1. WHAT IS ARTIFICIAL INTELLIGENCE? One of the fascinating aspects of the field of artificial intelligence (AI) is that the precise nature of its subject ...
In Logics for Artificial Intelligence, Raymond Turner leads us on a whirlwind tour of nonstandard logics and their general applications to AI and computer science.
Recent work in artificial intelligence has increasingly turned to argumentation as a rich, interdisciplinary area of research that can provide new methods related to evidence and reasoning in the area of law. Douglas Walton provides an introduction to basic concepts, tools and methods in argumentation theory and artificial intelligence as applied to the analysis and evaluation of witness testimony. He shows how witness testimony is by its nature inherently fallible and sometimes subject to disastrous failures. At the same time, such testimony can provide evidence that is not only necessary but inherently reasonable for logically guiding legal experts to accept or reject a claim. Walton shows how to overcome the traditional disdain for witness testimony as a type of evidence shown by logical positivists, and the views of trial sceptics who doubt that trial rules deal with witness testimony in a way that yields a rational decision-making process.
Some artificial intelligence systems can display algorithmic bias, i.e. they may produce outputs that unfairly discriminate against people based on their social identity. Much research on this topic focuses on algorithmic bias that disadvantages people based on their gender or racial identity. The related ethical problems are significant and well known. Algorithmic bias against other aspects of people’s social identity, for instance, their political orientation, remains largely unexplored. This paper argues that algorithmic bias against people’s political orientation can arise in some of the same ways in which algorithmic gender and racial biases emerge. However, it differs importantly from them because there are strong social norms against gender and racial biases. This does not hold to the same extent for political biases. Political biases can thus more powerfully influence people, which increases the chances that these biases become embedded in algorithms and makes algorithmic political biases harder to detect and eradicate than gender and racial biases, even though they all can produce similar harm. Since some algorithms can now also easily identify people’s political orientations against their will, these problems are exacerbated. Algorithmic political bias thus raises substantial and distinctive risks that the AI community should be aware of and examine.
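As a minimal illustration of how algorithmic bias of this kind might be surfaced, the Python sketch below computes a demographic-parity gap: the difference in favourable-outcome rates between groups defined by a protected attribute (here, political orientation). The data, labels, and function are assumptions for illustration and are not drawn from the paper.

```python
# Demographic-parity check: compare favourable-output rates across
# groups. All data below are invented for illustration only.

def parity_gap(outcomes, groups):
    """Return the gap between the highest and lowest favourable-outcome
    rates across groups, plus the per-group rates."""
    rates = {}
    for g in set(groups):
        selected = [o for o, gg in zip(outcomes, groups) if gg == g]
        rates[g] = sum(selected) / len(selected)
    return max(rates.values()) - min(rates.values()), rates

# 1 = favourable output (e.g. content recommended), 0 = not.
outcomes = [1, 1, 0, 1, 0, 0, 0, 1, 1, 0]
groups = ["left"] * 5 + ["right"] * 5

gap, rates = parity_gap(outcomes, groups)
print(rates)                      # per-group favourable rates
print(f"parity gap: {gap:.2f}")   # a large gap may signal bias
```

A zero gap does not establish fairness, of course; demographic parity is only one of several possible statistical criteria.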
Two leaders in the field offer a compelling analysis of the current state of the art and reveal the steps we must take to achieve a truly robust artificial intelligence. Despite the hype surrounding AI, creating an intelligence that rivals or exceeds human levels is far more complicated than we have been led to believe. Professors Gary Marcus and Ernest Davis have spent their careers at the forefront of AI research and have witnessed some of the greatest milestones in the field, but they argue that a computer beating a human in Jeopardy! does not signal that we are on the doorstep of fully autonomous cars or superintelligent machines. The achievements in the field thus far have occurred in closed systems with fixed sets of rules, and these approaches are too narrow to achieve genuine intelligence. The real world, in contrast, is wildly complex and open-ended. How can we bridge this gap? What will the consequences be when we do? Taking inspiration from the human mind, Marcus and Davis explain what we need to advance AI to the next level, and suggest that if we are wise along the way, we won't need to worry about a future of machine overlords. If we focus on endowing machines with common sense and deep understanding, rather than simply focusing on statistical analysis and gathering ever larger collections of data, we will be able to create an AI we can trust--in our homes, our cars, and our doctors' offices. Rebooting AI provides a lucid, clear-eyed assessment of the current science and offers an inspiring vision of how a new generation of AI can make our lives better.
In October 2016, the White House, the European Parliament, and the UK House of Commons each issued a report outlining their visions on how to prepare society for the widespread use of artificial intelligence. In this article, we provide a comparative assessment of these three reports in order to facilitate the design of policies favourable to the development of a ‘good AI society’. To do so, we examine how each report addresses the following three topics: the development of a ‘good AI society’; the role and responsibility of the government, the private sector, and the research community in pursuing such a development; and where the recommendations to support such a development may be in need of improvement. Our analysis concludes that the reports address adequately various ethical, social, and economic topics, but fall short of providing an overarching political vision and long-term strategy for the development of a ‘good AI society’. To help fill this gap, in the conclusion we suggest a two-pronged approach.
This book deals with the major philosophical issues in the theoretical framework of Artificial Intelligence (AI) in particular and cognitive science in general.
This paper discusses the problem of responsibility attribution raised by the use of artificial intelligence technologies. It is assumed that only humans can be responsible agents; yet this alone already raises many issues, which are discussed starting from two Aristotelian conditions for responsibility. Next to the well-known problem of many hands, the issue of “many things” is identified and the temporal dimension is emphasized when it comes to the control condition. Special attention is given to the epistemic condition, which draws attention to the issues of transparency and explainability. In contrast to standard discussions, however, it is then argued that this knowledge problem regarding agents of responsibility is linked to the other side of the responsibility relation: the addressees or “patients” of responsibility, who may demand reasons for actions and decisions made by using AI. Inspired by a relational approach, responsibility as answerability thus offers an important additional, if not primary, justification for explainability based, not on agency, but on patiency.
Applications of artificial intelligence (AI) for cybersecurity tasks are attracting greater attention from the private and the public sectors. Estimates indicate that the market for AI in cybersecurity will grow from US$1 billion in 2016 to US$34.8 billion by 2025. The latest national cybersecurity and defence strategies of several governments explicitly mention AI capabilities. At the same time, initiatives to define new standards and certification procedures to elicit users’ trust in AI are emerging on a global scale. However, trust in AI (both machine learning and neural networks) to deliver cybersecurity tasks is a double-edged sword: it can substantially improve cybersecurity practices, but can also facilitate new forms of attacks on the AI applications themselves, which may pose severe security threats. We argue that trust in AI for cybersecurity is unwarranted and that, to reduce security risks, some form of control to ensure the deployment of ‘reliable AI’ for cybersecurity is necessary. To this end, we offer three recommendations focusing on the design, development and deployment of AI for cybersecurity.
Family health education is a must for every family, so that children can be taught how to protect their own health. In this era of artificial intelligence, many technical operations based on AI have emerged, so the purpose of this study is to apply artificial intelligence technology to family health education. This paper proposes a fusion of artificial intelligence and IoT technologies. Based on the characteristics of artificial intelligence technology, it combines ZigBee technology and RFID technology from the Internet of Things to design an artificial intelligence-based service system. It then designs the themes of family health education by conducting a questionnaire on students' family education and analyzing the results. It also designs database and performance analysis experiments to improve the artificial intelligence-based family health education public service system proposed in this paper. Finally, a comparative experiment between the artificial intelligence-based family health education public service system and the traditional health education method is carried out. The experimental results show that the system improves on the traditional family health education method by 21.74%, and that its health education effect is 13.89% higher than that of the traditional method.
Advanced AI systems are rapidly making their way into medical research and practice, and, arguably, it is only a matter of time before they surpass human practitioners in terms of accuracy, reliability, and knowledge. If this is true, practitioners will have a prima facie epistemic and professional obligation to align their medical verdicts with those of advanced AI systems. However, in light of their complexity, these AI systems will often function as black boxes: the details of their contents, calculations, and procedures cannot be meaningfully understood by human practitioners. When AI systems reach this level of complexity, we can also speak of black-box medicine. In this paper, we want to argue that black-box medicine conflicts with core ideals of patient-centered medicine. In particular, we claim, black-box medicine is not conducive to supporting informed decision-making based on shared information, shared deliberation, and shared mind between practitioner and patient.
In this book, the author examines the ethical implications of Artificial Intelligence systems as they integrate and replace traditional social structures in new sociocognitive-technological environments. She discusses issues related to the integrity of researchers, technologists, and manufacturers as they design, construct, use, and manage artificially intelligent systems; formalisms for reasoning about moral decisions as part of the behavior of artificial autonomous systems such as agents and robots; and design methodologies for social agents based on societal, moral, and legal values. Throughout the book the author discusses related work, conscious of both classical, philosophical treatments of ethical issues and the implications in modern, algorithmic systems, and she combines regular references and footnotes with suggestions for further reading. This short overview is suitable for undergraduate students, in both technical and non-technical courses, and for interested and concerned researchers, practitioners, and citizens.
This book reports on the results of the third edition of the premier conference in the field of philosophy of artificial intelligence, PT-AI 2017, held on November 4-5, 2017 at the University of Leeds, UK. It covers: advanced knowledge on key AI concepts, including complexity, computation, creativity, embodiment, representation and superintelligence; cutting-edge ethical issues, such as AI's impact on human dignity and society, responsibilities and rights of machines, as well as AI threats to humanity and AI safety; and cutting-edge developments in techniques to achieve AI, including machine learning, neural networks, and dynamical systems. The book also discusses important applications of AI, including big data analytics, expert systems, cognitive architectures, and robotics. It offers a timely, yet very comprehensive snapshot of what is going on in the field of AI, especially at the interfaces between philosophy, cognitive science, ethics and computing.
Insofar as artificial intelligence is to be used to guide automated systems in their interactions with humans, the dominant view is probably that it would be appropriate to programme them to maximize (expected) utility. According to utilitarianism, which is a characteristically western conception of moral reason, machines should be programmed to do whatever they could in a given circumstance to produce in the long run the highest net balance of what is good for human beings minus what is bad for them. In this essay, I appeal to values that are characteristically African, but that will resonate with those from a variety of moral-philosophical traditions, particularly in the Global South, to cast doubt on a utilitarian approach. Drawing on norms salient in sub-Saharan ethics, I provide four reasons for thinking it would be immoral for automated systems governed by artificial intelligence to maximize utility. In catchphrases, I argue that utilitarianism cannot make adequate sense of the ways that human dignity, group rights, family first, and (surprisingly) self-sacrifice should determine the behaviour of smart machines.
Today’s capitalist economy has forced the human person to seek work as a means of survival, thereby stripping work of its value as a good intrinsically connected to the nature and dignity of the human person. Modern science and technology have been a fundamental tool in the advancement and sustainability of this orientation of the capitalist economy. Hence, the advancement of research in artificial intelligence (AI) is not only redefining the meaning of work but, more so, questioning the metaphysical notion of the human person and the theological notion of work as an intrinsic part of the selfhood and dignity of the human person. This work aims to expose the possible implications of the development of Artificial Intelligence for the selfhood and dignity of the human person with respect to the social teachings of the Catholic Church. This work shall be an interplay of philosophy and theology of Artificial Intelligence.
For the philosopher, the most critical and fundamental question in the project of Artificial Intelligence is the question of intelligence or cognition in general. From the beginning of research into “thinking machines”, or Artificial Intelligence as it later became known, the key question has been: What makes a thing intelligent, or what constitutes intelligence? Since intelligence is a fundamental activity of the mind, the question has been: Is the mind a computer, or is the computer a mind? Many philosophers who have engaged with, and are still interrogating, these problems do so from the perspective of modern and contemporary philosophy of mind, consciousness and language. The objective of this work is to interrogate the question of “intelligence” in Artificial Intelligence from the perspective of the Scholastics’ notion of Intellectus.
The enduring innovations in artificial intelligence and robotics offer the promised capacity of computer consciousness, sentience and rationality. The development of these advanced technologies has been considered to merit rights; however, these can only be ascribed in the context of commensurate responsibilities and duties. This represents the discernible next step for evolution in this field. Addressing these needs requires attention to the philosophical perspectives on moral responsibility for artificial intelligence and robotics. A contrast with the moral status of animals may be considered. At a practical level, the attainment of responsibilities by artificial intelligence and robots can benefit from the established responsibilities and duties of human society, as their subsistence exists within this domain. These responsibilities can be further interpreted and crystallized through legal principles, many of which have been conserved from ancient Roman law. The ultimate and unified goal of stipulating these responsibilities resides in the advancement of mankind and the enduring preservation of the core tenets of humanity.
Presupposing no familiarity with the technical concepts of either philosophy or computing, this clear introduction reviews the progress made in AI since the inception of the field in 1956. Copeland goes on to analyze what those working in AI must achieve before they can claim to have built a thinking machine and appraises their prospects of succeeding. There are clear introductions to connectionism and to the language of thought hypothesis which weave together material from philosophy, artificial intelligence and neuroscience. John Searle's attacks on AI and cognitive science are countered and close attention is given to foundational issues, including the nature of computation, Turing Machines, the Church-Turing Thesis and the difference between classical symbol processing and parallel distributed processing. The book also explores the possibility of machines having free will and consciousness and concludes with a discussion of in what sense the human brain may be a computer.
This paper argues that the Value Sensitive Design (VSD) methodology provides a principled approach to embedding common values into AI systems both early and throughout the design process. To do so, it draws on an important case study: the evidence and final report of the UK Select Committee on Artificial Intelligence. This empirical investigation shows that the different and often disparate stakeholder groups that are implicated in AI design and use share some common values that can be used to further strengthen design coordination efforts. VSD is shown to be able both to distill these common values and to provide a framework for stakeholder coordination.
Artificial intelligence research and regulation seek to balance the benefits of innovation against any potential harms and disruption. However, one unintended consequence of the recent surge in AI research is the potential re-orientation of AI technologies to facilitate criminal acts, termed in this article AI-Crime (AIC). AIC is theoretically feasible thanks to published experiments in automating fraud targeted at social media users, as well as demonstrations of AI-driven manipulation of simulated markets. However, because AIC is still a relatively young and inherently interdisciplinary area—spanning socio-legal studies to formal science—there is little certainty of what an AIC future might look like. This article offers the first systematic, interdisciplinary literature analysis of the foreseeable threats of AIC, providing ethicists, policy-makers, and law enforcement organisations with a synthesis of the current problems, and a possible solution space.
Artificial Intelligence and Scientific Method examines the remarkable advances made in the field of AI over the past twenty years, discussing their profound implications for philosophy. Taking a clear, non-technical approach, Donald Gillies shows how current views on scientific method are challenged by this recent research, and suggests a new framework for the study of logic. Finally, he draws on work by such seminal thinkers as Bacon, Gödel, Popper, Penrose, and Lucas, to address the hotly-contested question of whether computers might become intellectually superior to human beings.
The increasing use of Artificial Intelligence for making decisions in public affairs has sparked a lively debate on the benefits and potential harms of self-learning technologies, ranging from hopes of fully informed and objectively taken decisions to fears of the destruction of mankind. To prevent the negative outcomes and to achieve accountable systems, many have argued that we need to open up the “black box” of AI decision-making and make it more transparent. Whereas this debate has primarily focused on how transparency can secure high-quality, fair, and reliable decisions, far less attention has been devoted to the role of transparency when it comes to how the general public comes to perceive AI decision-making as legitimate and worthy of acceptance. Since relying on coercion is not only normatively problematic but also costly and highly inefficient, perceived legitimacy is fundamental to the democratic system. This paper discusses how transparency in and about AI decision-making can affect the public’s perception of the legitimacy of decisions and decision-makers, and develops a framework for analyzing these questions. We argue that a limited form of transparency that focuses on providing justifications for decisions has the potential to provide sufficient ground for perceived legitimacy without producing the harms full transparency would bring.
The development of artificial intelligence in medicine raises fundamental ethical issues. As one example, AI systems in the field of mental health successfully detect signs of mental disorders...
Wagman examines the emulation of human cognition by artificial intelligence systems. The book provides detailed examples of artificial intelligence programs (such as the FERMI System and KEKADA program) accomplishing highly intellectual tasks.
An argument with roots in ancient Greek philosophy claims that only humans are capable of a certain class of thought termed conceptual, as opposed to perceptual thought, which is common to humans, the higher animals, and some machines. We outline the most detailed modern version of this argument, due to Mortimer Adler, who in the 1960s argued for the uniqueness of the human power of conceptual thought. He also admitted that if conceptual thought were ever manifested by machines, such an achievement would contradict his conclusion. We revisit Adler’s criterion in the light of the past five decades of artificial-intelligence research, and refine it in view of the classical definitions of perceptual and conceptual thought. We then examine two well-publicized examples of creative works produced by AI systems and show that evidence for conceptual thought appears to be lacking in them. Although clearer evidence for conceptual thought on the part of AI systems may arise in the near future, especially if the global neuronal workspace theory of consciousness prevails over its rival, integrated information theory, the question of whether AI systems can engage in conceptual thought appears to be still open.
In July 2017, China’s State Council released the country’s strategy for developing artificial intelligence, entitled ‘New Generation Artificial Intelligence Development Plan’. This strategy outlined China’s aims to become the world leader in AI by 2030, to monetise AI into a trillion-yuan industry, and to emerge as the driving force in defining ethical norms and standards for AI. Several reports have analysed specific aspects of China’s AI policies or have assessed the country’s technical capabilities. Instead, in this article, we focus on the socio-political background and policy debates that are shaping China’s AI strategy. In particular, we analyse the main strategic areas in which China is investing in AI and the concurrent ethical debates that are delimiting its use. By focusing on the policy backdrop, we seek to provide a more comprehensive and critical understanding of China’s AI policy by bringing together debates and analyses of a wide array of policy documents.
The moral enhancement of human beings is a constant theme in the history of humanity. Today, faced with the threats of a new, globalised world, concern over this matter is more pressing. For this reason, the use of biotechnology to make human beings more moral has been considered. However, this approach is dangerous and very controversial. The purpose of this article is to argue that the use of another new technology, AI, would be preferable to achieve this goal. Whilst several proposals have been made on how to use AI for moral enhancement, we present an alternative that we argue to be superior to other proposals that have been developed.
An exploration of the important philosophical issues and concerns related to artificial intelligence. The book focuses on the philosophical, rather than the technical or technological, aspects of artificial intelligence.
Some recent developments in Artificial Intelligence—especially the use of machine learning systems, trained on big data sets and deployed in socially significant and ethically weighty contexts—have led to a number of calls for “transparency”. This paper explores the epistemological and ethical dimensions of that concept, as well as surveying and taxonomising the variety of ways in which it has been invoked in recent discussions. Whilst “outward” forms of transparency may be straightforwardly achieved, what I call “functional” transparency about the inner workings of a system is, in many cases, much harder to attain. In those situations, I argue that contestability may be a possible, acceptable, and useful alternative, so that even if we cannot understand how a system came up with a particular output, we at least have the means to challenge it.
Purpose: There is a significant amount of research into the ethical consequences of artificial intelligence, reflected in many outputs across academia, policy and the media, many of which aim to provide guidance to particular stakeholder groups. It has recently been shown that there is a large degree of convergence in terms of the principles upon which these guidance documents are based. Despite this convergence, it is not always clear how these principles are to be translated into practice. The purpose of this paper is to clearly illustrate this convergence and the prescriptive recommendations that such documents entail. Design/methodology/approach: In this paper, the authors move beyond the high-level ethical principles that are common across the AI ethics guidance literature and provide a description of the normative content that is covered by these principles. The outcome is a comprehensive compilation of normative requirements arising from existing guidance documents. This is not only required for a deeper theoretical understanding of AI ethics discussions but also for the creation of practical and implementable guidance for developers and users of AI. Findings: The authors provide a detailed explanation of the normative implications of existing AI ethics guidelines, directed towards developers and organisational users of AI. The authors believe that the paper provides the most comprehensive account of ethical requirements in AI currently available, which is of interest not only to the research and policy communities engaged in the topic but also to the user communities that require guidance when developing or deploying AI systems. Originality/value: The authors believe that they have compiled the most comprehensive document collecting existing guidance, which can guide practical action and will hopefully also support the consolidation of the guidelines landscape. The authors’ findings should also be of academic interest and inspire philosophical research on the consistency and justification of the various normative statements that can be found in the literature.
This paper critically assesses the possibility of moral enhancement with ambient intelligence technologies and artificial intelligence presented in Savulescu and Maslen (2015). The main problem with their proposal is that it is not robust enough to play a normative role in users’ behavior. A more promising approach, and the one presented in the paper, relies on an artificial moral reasoning engine, which is designed to present its users with moral arguments grounded in first-order normative theories, such as Kantianism or utilitarianism, that reason-responsive people can be persuaded by. This proposal can play a normative role and it is also a more promising avenue towards moral enhancement. It is more promising because such a system can be designed to take advantage of the sometimes undue trust that people put in automated technologies. We could therefore expect a well-designed moral reasoner system to be able to persuade people that may not be persuaded by similar arguments from other people. So, all things considered, there is hope in artificial intelligence for moral enhancement, but not in artificial intelligence that relies solely on ambient intelligence technologies.
AI, especially in the case of Deepfakes, has the capacity to undermine our confidence in the original, genuine, authentic nature of what we see and hear. And yet digital technologies, in the form of databases and other detection tools, also make it easier to spot forgeries and to establish the authenticity of a work. Using the notion of ectypes, this paper discusses current conceptions of authenticity and reproduction and examines how, in the future, these might be adapted for use in the digital sphere.
Artificial intelligences and robots increasingly mimic human mental powers and intelligent behaviour. However, many authors claim that ascribing human mental powers to them is both conceptually mistaken and morally dangerous. This article defends the view that artificial intelligences can have human-like mental powers, by claiming that both human and artificial minds can be seen as extended minds – along the lines of Chalmers and Clark’s view of mind and cognition. The main idea of this article is that the Extended Mind Model is independently plausible and can easily be extended to artificial intelligences, providing a solid base for concluding that artificial intelligences possess minds. This may warrant viewing them as morally responsible agents. Keywords: Artificial Intelligence; Mind; Moral Responsibility; Extended Cognition.
There is, in some quarters, concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development, and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
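To illustrate the kind of aggregation the survey reports, the Python sketch below takes hypothetical expert answers of the form "year by which I assign probability p to high-level machine intelligence" and reports the median across respondents for each probability level; all numbers are invented and are not the survey's data.

```python
import statistics

# Hypothetical per-expert answers: the year by which each expert
# assigns the given probability to high-level machine intelligence
# (invented numbers, not the survey's data).
answers = {
    0.10: [2022, 2025, 2030, 2028, 2035],
    0.50: [2040, 2045, 2050, 2042, 2060],
    0.90: [2070, 2075, 2080, 2072, 2100],
}

# Reporting the median across respondents, as the survey does, is
# robust to extreme individual answers.
for p, years in answers.items():
    print(f"P = {p:.0%}: median year {statistics.median(years)}")
```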
The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have rights and a moral status? I will tentatively defend the (increasingly widely held) view that, under certain conditions, artificially intelligent systems, like corporate entities, might qualify as responsible moral agents and as holders of limited rights and legal personhood. I will further suggest that regulators should permit the use of autonomous artificial systems in high-stakes settings only if they are engineered to function as moral (not just intentional) agents and/or there is some liability-transfer arrangement in place. I will finally raise the possibility that if artificial systems ever became phenomenally conscious, there might be a case for extending a stronger moral status to them, but argue that, as of now, this remains very hypothetical.
Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. After the introduction to the field (§1), the main themes (§2) of this article are: ethical issues that arise with AI systems as objects, i.e., tools made and used by humans, including issues of privacy (§2.1) and manipulation (§2.2), opacity (§2.3) and bias (§2.4), human-robot interaction (§2.5), employment (§2.6), and the effects of autonomy (§2.7); then AI systems as subjects, i.e., ethics for the AI systems themselves in machine ethics (§2.8) and artificial moral agency (§2.9); and finally the problem of a possible future AI superintelligence leading to a “singularity” (§2.10). We close with a remark on the vision of AI (§3). For each section within these themes, we provide a general explanation of the ethical issues, outline existing positions and arguments, then analyse how these play out with current technologies and, finally, what policy consequences may be drawn.
The question whether AI systems have agency is gaining increasing importance in discussions of responsibility for AI behavior. This paper argues that an approach to artificial agency needs to be teleological, and in particular needs to consider the role of human goals, if it is to adequately address the issue of responsibility. I will defend the view that while AI systems can be viewed as autonomous in the sense of identifying or pursuing goals, they rely on human goals and other values incorporated into their design, and are, as such, dependent on human agents. As a consequence, AI systems cannot be held morally responsible, and responsibility attributions should take into account normative and social aspects involved in the design and deployment of said AI. My argument falls in line with approaches critical of attributing moral agency to artificial agents, but draws from the philosophy of action, highlighting further philosophical underpinnings of current debates on artificial agency.
This paper looks at philosophical questions that arise in the context of AI alignment. It defends three propositions. First, normative and technical aspects of the AI alignment problem are interrelated, creating space for productive engagement between people working in both domains. Second, it is important to be clear about the goal of alignment. There are significant differences between AI that aligns with instructions, intentions, revealed preferences, ideal preferences, interests and values. A principle-based approach to AI alignment, which combines these elements in a systematic way, has considerable advantages in this context. Third, the central challenge for theorists is not to identify ‘true’ moral principles for AI; rather, it is to identify fair principles for alignment that receive reflective endorsement despite widespread variation in people’s moral beliefs. The final part of the paper explores three ways in which fair principles for AI alignment could potentially be identified.
The concept of artificial intelligence is not new, nor is the notion that it should be granted legal protections given its influence on human activity. What is new, on a relative scale, is the notion that artificial intelligence can possess citizenship—a concept reserved only for humans, as it presupposes the idea of possessing civil duties and protections. Given that there are several decades’ worth of writing on the concept of the legal status of computational artificial artefacts in the USA and elsewhere, it is surprising that lawmakers internationally have come to a standstill in protecting our silicon brainchildren. In this essay, it will be assumed that future artificial entities, such as Sophia the Robot, will be granted citizenship on an international scale. With this assumption, an analysis of rights will be made with respect to the needs of a non-biological intelligence possessing legal and civic duties akin to those possessed by humanity today. This essay does not present a full set of rights for artificial intelligence—instead, it aims to provide international jurisprudence evidence aliunde ab extra de lege lata for any future measures made to protect non-biological intelligence.