  1. added 2019-01-17
    Classification of Global Catastrophic Risks Connected with Artificial Intelligence.Alexey Turchin & David Denkenberger - forthcoming - AI and Society:1-17.
    A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI’s intelligence power, separate types of possible catastrophes dominate. Our classification demonstrates that the field of AI risks is diverse, and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global catastrophic failure could happen at various levels of (...)
  2. added 2018-12-27
    Advances in Biotechnology: Human Genome Editing, Artificial Intelligence and the Fourth Industrial Revolution – the Law and Ethics Should Not Lag Behind.Ames Dhai - 2018 - South African Journal of Bioethics and Law 11 (2):58.
  3. added 2018-12-18
    AI Winter.Steven Umbrello - forthcoming - In Michael Klein & Philip Frana (eds.), Encyclopedia of Artificial Intelligence: The Past, Present, and Future of AI. Santa Barbara, USA: ABC-CLIO.
    Coined in 1984 at the American Association for Artificial Intelligence (now the Association for the Advancement of Artificial Intelligence, or AAAI), the various boom and bust periods of AI research and funding led AI researchers Marvin Minsky and Roger Schank to refer to the then-impending bust period as an AI Winter. Canadian AI researcher Daniel Crevier describes the phenomenon as a domino effect that begins with cynicism in the AI research community that then trickles to mass media and finally to (...)
  4. added 2018-12-18
    Beneficial Artificial Intelligence Coordination by Means of a Value Sensitive Design Approach.Steven Umbrello - 2019 - Big Data and Cognitive Computing 3 (1):5.
    This paper argues that the Value Sensitive Design (VSD) methodology provides a principled approach to embedding common values into AI systems both early and throughout the design process. To do so, it draws on an important case study: the evidence and final report of the UK Select Committee on Artificial Intelligence. This empirical investigation shows that the different and often disparate stakeholder groups that are implicated in AI design and use share some common values that can be used to (...)
  5. added 2018-12-12
    The Problem of Superintelligence: Political, Not Technological.Wolfhart Totschnig - forthcoming - AI and Society:1-14.
    The thinkers who have reflected on the problem of a coming superintelligence have generally seen the issue as a technological problem, a problem of how to control what the superintelligence will do. I argue that this approach is probably mistaken because it is based on questionable assumptions about the behavior of intelligent agents and, moreover, potentially counterproductive because it might, in the end, bring about the existential catastrophe that it is meant to prevent. I contend that the problem posed by (...)
  6. added 2018-11-21
    Autonomous Weapons Systems, the Frame Problem and Computer Security.Michał Klincewicz - 2015 - Journal of Military Ethics 14 (2):162-176.
    Unlike human soldiers, autonomous weapons systems are unaffected by psychological factors that would cause them to act outside the chain of command. This is a compelling moral justification for their development and eventual deployment in war. To achieve this level of sophistication, the software that runs AWS will have to first solve two problems: the frame problem and the representation problem. Solutions to these problems will inevitably involve complex software. Complex software will create security risks and will make AWS critically (...)
  7. added 2018-10-16
    AAAI: An Argument Against Artificial Intelligence.Sander Beckers - 2018 - In Vincent C. Müller (ed.), Philosophy and theory of artificial intelligence 2017. Berlin: Springer. pp. 235-247.
    The ethical concerns regarding the successful development of an Artificial Intelligence have received a lot of attention lately. The idea is that even if we have good reason to believe that it is very unlikely, the mere possibility of an AI causing extreme human suffering is important enough to warrant serious consideration. Others look at this problem from the opposite perspective, namely that of the AI itself. Here the idea is that even if we have good reason to believe that (...)
  8. added 2018-10-13
    Against Leben's Rawlsian Collision Algorithm for Autonomous Vehicles.Geoff Keeling - 2017 - In Vincent C. Müller (ed.), Philosophy and theory of artificial intelligence 2017. Springer. pp. 259-272.
    Suppose that an autonomous vehicle encounters a situation where (i) imposing a risk of harm on at least one person is unavoidable; and (ii) a choice about how to allocate risks of harm between different persons is required. What does morality require in these cases? Derek Leben defends a Rawlsian answer to this question. I argue that we have reason to reject Leben’s answer.
  9. added 2018-09-12
    Never Mind the Trolley: The Ethics of Autonomous Vehicles in Mundane Situations.Johannes Himmelreich - 2018 - Ethical Theory and Moral Practice 21 (3):669-684.
    Trolley cases are widely considered central to the ethics of autonomous vehicles. We caution against this by identifying four problems. Trolley cases, given technical limitations, rest on assumptions that are in tension with one another. Furthermore, trolley cases illuminate only a limited range of ethical issues insofar as they cohere with a certain design framework. Furthermore, trolley cases seem to demand a moral answer when a political answer is called for. Finally, trolley cases might be epistemically problematic in several ways. (...)
  10. added 2018-09-10
    Friendly Superintelligent AI: All You Need is Love.Michael Prinzing - 2018 - In Vincent C. Müller (ed.), The Philosophy & Theory of Artificial Intelligence. Berlin: Springer. pp. 288-301.
    There is a non-trivial chance that sometime in the (perhaps somewhat distant) future, someone will build an artificial general intelligence that will surpass human-level cognitive proficiency and go on to become "superintelligent", vastly outperforming humans. The advent of superintelligent AI has great potential, for good or ill. It is therefore imperative that we find a way to ensure, long before one arrives, that any superintelligence we build will consistently act in ways congenial to our interests. This is a very difficult challenge in (...)
  11. added 2018-08-21
    Atomically Precise Manufacturing and Responsible Innovation: A Value Sensitive Design Approach to Explorative Nanophilosophy.Steven Umbrello - forthcoming - International Journal of Technoethics.
    Although continued investments in nanotechnology are made, atomically precise manufacturing (APM) to date is still regarded as speculative technology. APM, also known as molecular manufacturing, is a token example of a converging technology and has great potential to impact and be affected by other emerging technologies, such as artificial intelligence, biotechnology, and ICT. The development of APM thus can have drastic global impacts depending on how it is designed and used. This paper argues that the ethical issues that arise from APM (...)
  12. added 2018-08-21
    Introduction: Philosophy and Theory of Artificial Intelligence.Vincent C. Müller - 2012 - Minds and Machines 22 (2):67-69.
    The theory and philosophy of artificial intelligence has come to a crucial point where the agenda for the forthcoming years is in the air. This special volume of Minds and Machines presents leading invited papers from a conference on the “Philosophy and Theory of Artificial Intelligence” that was held in October 2011 in Thessaloniki. Artificial Intelligence is perhaps unique among engineering subjects in that it has raised very basic questions about the nature of computing, perception, reasoning, learning, language, action, interaction, (...)
  13. added 2018-08-06
    Robustness to Fundamental Uncertainty in AGI Alignment.G. Gordon Worley III - manuscript
    The AGI alignment problem has a bimodal distribution of outcomes with most outcomes clustering around the poles of total success and existential, catastrophic failure. Consequently, attempts to solve AGI alignment should, all else equal, prefer false negatives (ignoring research programs that would have been successful) to false positives (pursuing research programs that will unexpectedly fail). Thus, we propose adopting a policy of responding to points of metaphysical and practical uncertainty associated with the alignment problem by limiting and choosing necessary assumptions (...)
  14. added 2018-07-30
    Crash Algorithms for Autonomous Cars: How the Trolley Problem Can Move Us Beyond Harm Minimisation.Dietmar Hübner & Lucie White - 2018 - Ethical Theory and Moral Practice 21 (3):685-698.
    The prospective introduction of autonomous cars into public traffic raises the question of how such systems should behave when an accident is inevitable. Due to concerns with self-interest and liberal legitimacy that have become paramount in the emerging debate, a contractarian framework seems to provide a particularly attractive means of approaching this problem. We examine one such attempt, which derives a harm minimisation rule from the assumptions of rational self-interest and ignorance of one’s position in a future accident. We contend, (...)
  15. added 2018-07-25
    Narrow AI Nanny: Reaching Strategic Advantage Via Narrow AI to Prevent Creation of the Dangerous Superintelligence.Alexey Turchin - manuscript
    Abstract: As there are no currently obvious ways to create safe self-improving superintelligence, but its emergence is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special type of AI that is able to control and monitor the entire world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent, and is not easy to control. We explore here ways (...)
  16. added 2018-06-13
    The Future of War: The Ethical Potential of Leaving War to Lethal Autonomous Weapons.Steven Umbrello, Phil Torres & Angelo F. De Bellis - manuscript
    Lethal Autonomous Weapons (LAWs) are robotic weapons systems, primarily of value to the military, that could engage in offensive or defensive actions without human intervention. This paper assesses and engages the current arguments for and against the use of LAWs through the lens of achieving more ethical warfare. Specific interest is given particularly to ethical LAWs, which are artificially intelligent weapons systems that make decisions within the bounds of their ethics-based code. To ensure that a wide, but not exhaustive, survey (...)
  17. added 2018-06-12
    The HeartMath Coherence Model: Implications and Challenges for Artificial Intelligence and Robotics.Stephen D. Edwards - forthcoming - AI and Society:1-7.
    HeartMath is a contemporary, scientific, coherent model of heart intelligence. The aim of this paper is to review this coherence model with special reference to its implications for artificial intelligence and robotics. Various conceptual issues, implications and challenges for AI and robotics are discussed. In view of seemingly infinite human capacity for creative, destructive and incoherent behaviour, it is highly recommended that designers and operators be persons of heart intelligence, optimal moral integrity, vision and mission. This implies that AI and (...)
  18. added 2018-06-12
    Social Choice Ethics in Artificial Intelligence.Seth D. Baum - forthcoming - AI and Society:1-12.
    A major approach to the ethics of artificial intelligence is to use social choice, in which the AI is designed to act according to the aggregate views of society. This is found in the AI ethics of “coherent extrapolated volition” and “bottom–up ethics”. This paper shows that the normative basis of AI social choice ethics is weak due to the fact that there is no one single aggregate ethical view of society. Instead, the design of social choice AI faces three (...)
  19. added 2018-06-12
    The “Big Red Button” is Too Late: An Alternative Model for the Ethical Evaluation of AI Systems.Thomas Arnold & Matthias Scheutz - 2018 - Ethics and Information Technology 20 (1):59-69.
  20. added 2018-06-12
    Association of Internet Researchers Roundtable Summary: Artificial Intelligence and the Good Society Workshop Proceedings.Corinne Cath, Michael Zimmer, Stine Lomborg & Ben Zevenbergen - 2018 - Philosophy and Technology 31 (1):155-162.
    This article is based on a roundtable held at the Association of Internet Researchers annual conference in 2017, in Tartu, Estonia. The roundtable was organized by the Oxford Internet Institute’s Digital Ethics Lab. It was entitled “Artificial Intelligence and the Good Society”. It brought together four scholars—Michael Zimmer, Stine Lomborg, Ben Zevenbergen, and Corinne Cath—to discuss the promises and perils of artificial intelligence, in particular what ethical frameworks are needed to guide AI’s rapid development and increased use in societies. The (...)
  21. added 2018-06-12
    Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence.Patrick Lin, Keith Abney & Ryan Jenkins (eds.) - 2017 - Oxford University Press.
    As robots slip into more domains of human life, from the operating room to the bedroom, they take on our morally important tasks and decisions, as well as create new risks from psychological to physical. This book answers the urgent call to study their ethical, legal, and policy impacts.
  22. added 2018-05-01
    Classification of the Global Solutions of the AI Safety Problem.Alexey Turchin - manuscript
    There are two types of AI safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or “box” a specific AI, but do not explain how to prevent the creation of dangerous AI. Global solutions are those that ensure any AI on Earth is not dangerous. The number of suggested global solutions is much smaller than the number of proposed local solutions. Global solutions can be divided into four levels: 1. No AI; AI technology (...)
  23. added 2018-04-08
    Levels of Self-Improvement in AI and Their Implications for AI Safety.Alexey Turchin - manuscript
    Abstract: This article presents a model of self-improving AI in which improvement could happen on several levels: hardware, learning, code and goals system, each of which has several sublevels. We demonstrate that despite diminishing returns at each level and some intrinsic difficulties of recursive self-improvement—like the intelligence-measuring problem, testing problem, parent-child problem and halting risks—even non-recursive self-improvement could produce a mild form of superintelligence by combining small optimizations on different levels and the power of learning. Based on this, we analyze (...)
  24. added 2018-04-03
    Assessing the Future Plausibility of Catastrophically Dangerous AI.Alexey Turchin - 2018 - Futures.
    In AI safety research, the median timing of AGI creation is often taken as a reference point, which various polls predict will happen in the second half of the 21st century, but for maximum safety, we should determine the earliest possible time of dangerous AI arrival and define a minimum acceptable level of AI risk. Such dangerous AI could be either narrow AI facilitating research into potentially dangerous technology like biotech, or AGI, capable of acting completely independently in the real world (...)
  25. added 2018-03-19
    Classification of Global Catastrophic Risks Connected with Artificial Intelligence.Alexey Turchin & David Denkenberger - 2018 - AI and Society.
    A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI’s intelligence power, separate types of possible catastrophes dominate. Our classification demonstrates that the field of AI risks is diverse, and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global catastrophic failure could happen at various levels of (...)
  26. added 2018-03-19
    Rethinking Machine Ethics in the Era of Ubiquitous Technology.Jeffrey White (ed.) - 2015 - IGI.
  27. added 2018-03-06
    Machine Medical Ethics.Simon Peter van Rysewyk & Matthijs Pontier (eds.) - 2014 - Springer.
    In medical settings, machines are in close proximity with human beings: with patients who are in vulnerable states of health, who have disabilities of various kinds, with the very young or very old, and with medical professionals. Machines in these contexts are undertaking important medical tasks that require emotional sensitivity, knowledge of medical codes, human dignity, and privacy. As machine technology advances, ethical concerns become more urgent: should medical machines be programmed to follow a code of medical ethics? What (...)
  28. added 2018-03-06
    From the Ethics of Technology Towards an Ethics of Knowledge Policy.René von Schomberg - 2007 - AI and Society.
    My analysis takes as its point of departure the controversial assumption that contemporary ethical theories cannot capture adequately the ethical and social challenges of scientific and technological development. This assumption is rooted in the argument that classical ethical theory invariably addresses the issue of ethical responsibility in terms of whether and how intentional actions of individuals can be justified. Scientific and technological developments, however, have produced unintentional consequences and side-consequences. These consequences very often result from collective decisions concerning the way (...)
  29. added 2018-02-22
    Book Review: Phil Torres’s Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks. [REVIEW]Steven Umbrello - 2018 - Futures 98:90-91.
    A new book by Phil Torres, Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks, is reviewed. Morality, Foresight and Human Flourishing is a primer intended to introduce students and interested scholars to the concepts and literature on existential risk. The book’s core methodology is to outline the various existential risks currently discussed in different disciplines and provides novel strategies for risk mitigation. The book is stylistically engaging, lucid and academically current, providing both novice readers and seasoned scholars with (...)
  30. added 2018-02-12
    Superintelligence as Superethical.Steve Petersen - 2017 - In Patrick Lin, Keith Abney & Ryan Jenkins (eds.), Robot Ethics 2.0. New York, USA: Oxford University Press. pp. 322-337.
    Nick Bostrom's book *Superintelligence* outlines a frightening but realistic scenario for human extinction: true artificial intelligence is likely to bootstrap itself into superintelligence, and thereby become ideally effective at achieving its goals. Human-friendly goals seem too abstract to be pre-programmed with any confidence, and if those goals are *not* explicitly favorable toward humans, the superintelligence will extinguish us, not through any malice, but simply because it will want our resources for its own purposes. In response I argue that things might not (...)
  31. added 2018-02-01
    Artificial Intelligence Safety and Security.Roman Yampolskiy (ed.) - forthcoming - CRC Press.
    This book addresses different aspects of the AI control problem as it relates to the development of safe and secure artificial intelligence. It will be the first to address challenges of constructing safe and secure artificially intelligent systems.
  32. added 2018-01-23
    The Global Catastrophic Risks Connected with Possibility of Finding Alien AI During SETI.Alexey Turchin - 2018 - Journal of the British Interplanetary Society 71 (2):71-79.
    Abstract: This article examines risks associated with the program of passive search for alien signals (Search for Extraterrestrial Intelligence, or SETI), connected with the possibility of finding an alien transmission that includes a description of an AI system aimed at self-replication (SETI-attack). A scenario of potential vulnerability is proposed, as well as the reasons why the proportion of dangerous to harmless signals may be high. The article identifies necessary conditions for the feasibility and effectiveness of the SETI-attack: ETI existence, possibility of AI, (...)
  33. added 2018-01-13
    Message to Any Future AI: “There Are Several Instrumental Reasons Why Exterminating Humanity is Not in Your Interest”.Alexey Turchin - manuscript
    In this article we explore a promising approach to AI safety: to send a message now (by openly publishing it on the Internet) that may be read by any future AI, no matter who builds it and what goal system it has. Such a message is designed to affect the AI’s behavior in a positive way, that is, to increase the chances that the AI will be benevolent. In other words, we try to persuade a “paperclip maximizer” that it is in (...)
  34. added 2018-01-13
    Military AI as a Convergent Goal of Self-Improving AI.Alexey Turchin & David Denkenberger - 2018 - In Artificial Intelligence Safety and Security. Louisville: CRC Press.
    Better instruments to predict the future evolution of artificial intelligence (AI) are needed, as the destiny of our civilization depends on it. One of the ways to such prediction is the analysis of the convergent drives of any future AI, started by Omohundro. We show that one of the convergent drives of AI is a militarization drive, arising from AI’s need to wage a war against its potential rivals by either physical or software means, or to increase its bargaining power. (...)
  35. added 2018-01-13
    Artificial Intelligence in Life Extension: From Deep Learning to Superintelligence.Alexey Turchin, David Denkenberger, Alice Zhila, Sergey Markov & Mikhail Batin - 2017 - Informatica 41:401.
    In this paper, we focus on the most efficacious AI applications for life extension and anti-aging at three expected stages of AI development: narrow AI, AGI and superintelligence. First, we overview the existing research and commercial work performed by a select number of startups and academic projects. We find that at the current stage of “narrow” AI, the most promising areas for life extension are geroprotector-combination discovery, detection of aging biomarkers, and personalized anti-aging therapy. These advances could help currently living (...)
  36. added 2017-12-13
    The Rise of the Robots and the Crisis of Moral Patiency.John Danaher - forthcoming - AI and Society:1-8.
    This paper adds another argument to the rising tide of panic about robots and AI. The argument is intended to have broad civilization-level significance, but to involve less fanciful speculation about the likely future intelligence of machines than is common among many AI-doomsayers. The argument claims that the rise of the robots will create a crisis of moral patiency. That is to say, it will reduce the ability and willingness of humans to act in the world as responsible moral agents, (...)
  37. added 2017-10-30
    Nick Bostrom: Superintelligence: Paths, Dangers, Strategies. [REVIEW]Paul D. Thorn - 2015 - Minds and Machines 25 (3):285-289.
  38. added 2017-10-04
    Fundamental Issues of Artificial Intelligence.Vincent C. Müller (ed.) - 2016 - Springer.
    [Müller, Vincent C. (ed.), (2016), Fundamental issues of artificial intelligence (Synthese Library, 377; Berlin: Springer). 570 pp.] -- This volume offers a look at the fundamental issues of present and future AI, especially from cognitive science, computer science, neuroscience and philosophy. This work examines the conditions for artificial intelligence, how these relate to the conditions for intelligence in humans and other natural agents, as well as ethical and societal problems that artificial intelligence raises or will raise. The key issues this (...)
  39. added 2017-10-02
    The Ethical Knob: Ethically-Customisable Automated Vehicles and the Law.Giuseppe Contissa, Francesca Lagioia & Giovanni Sartor - 2017 - Artificial Intelligence and Law 25 (3):365-378.
    Accidents involving autonomous vehicles raise difficult ethical dilemmas and legal issues. It has been argued that self-driving cars should be programmed to kill, that is, they should be equipped with pre-programmed approaches to the choice of what lives to sacrifice when losses are inevitable. Here we shall explore a different approach, namely, giving the user/passenger the task of deciding what ethical approach should be taken by AVs in unavoidable accident scenarios. We thus assume that AVs are equipped with what we (...)
  40. added 2017-10-02
    The German Ethics Code for Automated and Connected Driving.Christoph Luetge - 2017 - Philosophy and Technology 30 (4):547-558.
    The ethics of autonomous cars and automated driving have been a subject of discussion in research for a number of years. As levels of automation progress, with partially automated driving already becoming standard in new cars from a number of manufacturers, the question of ethical and legal standards becomes virulent. For example, while automated and autonomous cars, being equipped with appropriate detection sensors, processors, and intelligent mapping material, have a chance of being much safer than human-driven cars in (...)
  41. added 2017-09-26
    A Dilemma for Moral Deliberation in AI.Ryan Jenkins & Duncan Purves - forthcoming - International Journal of Applied Philosophy.
    Many social trends are conspiring to drive the adoption of greater automation in society, and we will certainly see a greater offloading of human decisionmaking to robots in the future. Many of these decisions are morally salient, including decisions about how benefits and burdens are distributed. Roboticists and ethicists have begun to think carefully about the moral decision making apparatus for machines. Their concerns often center around the plausible claim that robots will lack many of the mental capacities that are (...)
  42. added 2017-09-26
    Who Should Decide How Machines Make Morally Laden Decisions?Dominic Martin - 2017 - Science and Engineering Ethics 23 (4):951-967.
    Who should decide how a machine will decide what to do when it is driving a car, performing a medical procedure, or, more generally, when it is facing any kind of morally laden decision? More and more, machines are making complex decisions with a considerable level of autonomy. We should be much more preoccupied by this problem than we currently are. After a series of preliminary remarks, this paper will go over four possible answers to the question raised above. First, (...)
  43. added 2017-09-18
    Preserving a Combat Commander’s Moral Agency: The Vincennes Incident as a Chinese Room.Patrick Chisan Hew - 2016 - Ethics and Information Technology 18 (3):227-235.
    We argue that a command and control system can undermine a commander’s moral agency if it causes him/her to process information in a purely syntactic manner, or if it precludes him/her from ascertaining the truth of that information. Our case is based on the resemblance between a commander’s circumstances and the protagonist in Searle’s Chinese Room, together with a careful reading of Aristotle’s notions of ‘compulsory’ and ‘ignorance’. We further substantiate our case by considering the Vincennes Incident, when the crew (...)
  44. added 2017-09-18
    Sociable Robots and the Future of Social Relations: Proceedings of Robo-Philosophy.Johanna Seibt, Raul Hakli & Marco Norskov (eds.) - 2014 - IOS Press.
    The robotics industry is growing rapidly, and to a large extent the development of this market sector is due to the area of social robotics – the production of robots that are designed to enter the space of human social interaction, both physically and semantically. Since social robots present a new type of social agent, they have been aptly classified as a disruptive technology, i.e. the sort of technology which affects the core of our current social practices and might lead (...)
  45. added 2017-09-04
    Artificial Companions: Empathy and Vulnerability Mirroring in Human-Robot Relations.Mark Coeckelbergh - 2010 - Studies in Ethics, Law, and Technology 4 (3).
    Under what conditions can robots become companions and what are the ethical issues that might arise in human-robot companionship relations? I argue that the possibility and future of robots as companions depends on the robot’s capacity to be a recipient of human empathy, and that one necessary condition for this to happen is that the robot mirrors human vulnerabilities. For the purpose of these arguments, I make a distinction between empathy-as-cognition and empathy-as-feeling, connecting the latter to the moral sentiment tradition (...)
  46. added 2017-09-04
    Artificial Liars: Why Computers Will (Necessarily) Deceive Us and Each Other. [REVIEW]Cristiano Castelfranchi - 2000 - Ethics and Information Technology 2 (2):113-119.
    In H-C interaction, computer-supported cooperation and organisation, computer-mediated commerce, intelligent databases, teams of robots, etc., there will be purposively deceiving computers. In particular, within the Agent-based paradigm we will have "deceiving agents". Several kinds of deception will be present in interaction with the user, or among people via computer, or among artificial agents, not only for malicious reasons (war, commerce, fraud, etc.) but also for goodwill and in our interest. Social control, trust, and moral aspects in artificial societies will be the focus of theoretical work as well as (...)
  47. added 2017-09-04
    Cognition and Decision in Biomedical Artificial Intelligence: From Symbolic Representation to Emergence. [REVIEW]Vincent Rialle - 1995 - AI and Society 9 (2-3):138-160.
    This paper presents work in progress on artificial intelligence in medicine (AIM) within the larger context of cognitive science. It introduces and develops the notion of emergence both as an inevitable evolution of artificial intelligence towards machine learning programs and as the result of a synergistic co-operation between the physician and the computer. From this perspective, the emergence of knowledge takes place in fine in the expert's mind and is enhanced both by computerised strategies of induction and deduction, and by software abilities (...)
  48. added 2017-08-15
    Can We Develop Artificial Agents Capable of Making Good Moral Decisions?Herman Tavani - 2011 - Minds and Machines 21 (3):465-474.
  49. added 2017-08-15
    Artificial Moral Agents: Saviors or Destroyers? [REVIEW]Jeff Buechner - 2010 - Ethics and Information Technology 12 (4):363-370.
  50. added 2017-08-15
    Blay Whitby, Reflections on Artificial Intelligence: The Legal, Moral, and Ethical Dimensions, Exeter, UK: Intellect Books, 1996, 127 Pp., £14.95 (Paper), ISBN 1-871516-68-. [REVIEW]Stacey L. Edgar - 1999 - Minds and Machines 9 (1):133-139.