77 found
1 — 50 / 77
  1. The “Big Red Button” is Too Late: An Alternative Model for the Ethical Evaluation of AI Systems.Thomas Arnold & Matthias Scheutz - 2018 - Ethics and Information Technology 20 (1):59-69.
  2. Can Artificial Intelligences Suffer From Mental Illness? A Philosophical Matter to Consider.Hutan Ashrafian - 2017 - Science and Engineering Ethics 23 (2):403-412.
    The potential for artificial intelligences and robotics in achieving the capacity of consciousness, sentience and rationality offers the prospect that these agents have minds. If so, then there may be a potential for these minds to become dysfunctional, or for artificial intelligences and robots to suffer from mental illness. The existence of artificially intelligent psychopathology can be interpreted through the philosophical perspectives of mental illness. This offers new insights into what it means to have either robot or human mental disorders, (...)
  3. Social Choice Ethics in Artificial Intelligence.Seth D. Baum - forthcoming - AI and Society:1-12.
    A major approach to the ethics of artificial intelligence is to use social choice, in which the AI is designed to act according to the aggregate views of society. This is found in the AI ethics of “coherent extrapolated volition” and “bottom–up ethics”. This paper shows that the normative basis of AI social choice ethics is weak due to the fact that there is no one single aggregate ethical view of society. Instead, the design of social choice AI faces three (...)
  4. AAAI: An Argument Against Artificial Intelligence.Sander Beckers - 2018 - In Vincent C. Müller (ed.), Philosophy and theory of artificial intelligence 2017. Berlin: Springer. pp. 235-247.
    The ethical concerns regarding the successful development of an Artificial Intelligence have received a lot of attention lately. The idea is that even if we have good reason to believe that it is very unlikely, the mere possibility of an AI causing extreme human suffering is important enough to warrant serious consideration. Others look at this problem from the opposite perspective, namely that of the AI itself. Here the idea is that even if we have good reason to believe that (...)
  5. Ethical and Legal Issues in the Use of Health Information Technology to Improve Patient Safety.Eta S. Berner - 2008 - HEC Forum 20 (3):243-258.
    There are a variety of ethical and legal issues that arise with the growing use of health information technology in clinical settings. While privacy and confidentiality of information is an important consideration in any electronic system, some of the issues related to using these systems to improve patient safety include changes to the standard of care in regard to using electronic rather than paper medical records, user training, and assuring accurate information is in the medical record and provided to users. (...)
    1 citation
  6. Intelligent Agents in Military, Defense and Warfare: Ethical Issues and Concerns.Mr Sahon Bhattacharyya - unknown
    Due to tremendous progress in digital electronics now intelligent and autonomous agents are gradually being adopted into the fields and domains of the military, defense and warfare. This paper tries to explore some of the inherent ethical issues, threats and some remedial issues about the impact of such systems on human civilization and existence in general. This paper discusses human ethics in contrast to machine ethics and the problems caused by non-sentient agents. A systematic study is made on paradoxes regarding (...)
  7. Intelligence Unbound: The Future of Uploaded and Machine Minds.Russell Blackford & Damien Broderick (eds.) - 2014 - Wiley-Blackwell.
    _Intelligence Unbound_ explores the prospects, promises, and potential dangers of machine intelligence and uploaded minds in a collection of state-of-the-art essays from internationally recognized philosophers, AI researchers, science fiction authors, and theorists. Compelling and intellectually sophisticated exploration of the latest thinking on Artificial Intelligence and machine minds Features contributions from an international cast of philosophers, Artificial Intelligence researchers, science fiction authors, and more Offers current, diverse perspectives on machine intelligence and uploaded minds, emerging topics of tremendous interest Illuminates the nature (...)
  8. Ethical Issues in Advanced Artificial Intelligence.Nick Bostrom - manuscript
    The ethical issues related to the possible future creation of machines with general intellectual capabilities far outstripping those of humans are quite distinct from any ethical problems arising in current automation and information systems. Such superintelligence would not be just another technological development; it would be the most important invention ever made, and would lead to explosive progress in all scientific and technological fields, as the superintelligence would conduct research with superhuman efficiency. To the extent that ethics is a cognitive (...)
    4 citations
  9. When Machines Outsmart Humans.Nick Bostrom - manuscript
    Artificial intelligence is a possibility that should not be ignored in any serious thinking about the future, and it raises many profound issues for ethics and public policy that philosophers ought to start thinking about. This article outlines the case for thinking that human-level machine intelligence might well appear within the next half century. It then explains four immediate consequences of such a development, and argues that machine intelligence would have a revolutionary impact on a wide range of the social, (...)
    1 citation
  10. Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards.Nick Bostrom - 2002 - Journal of Evolution and Technology 9.
    Because of accelerating technological progress, humankind may be rapidly approaching a critical phase in its career. In addition to well-known threats such as nuclear holocaust, the prospects of radically transforming technologies like nanotech systems and machine intelligence present us with unprecedented opportunities and risks. Our future, and whether we will have a future at all, may well be determined by how we deal with these challenges. In the case of radically transforming technologies, a better understanding of the transition dynamics from (...)
    7 citations
  11. Fast, Cheap & Out of Control.Rodney A. Brooks - 1999 - Sony Pictures Classics Weta-Tv.
    Complex systems and complex missions take years of planning and force launches to become incredibly expensive. The longer the planning and the more expensive the mission, the more catastrophic if it fails. The solution has always been to plan better, add redundancy, test thoroughly and use high quality components. Based on our experience in building ground based mobile robots (legged and wheeled) we argue here for cheap, fast missions using large numbers of mass produced simple autonomous robots that are small (...)
  12. Artificial Moral Agents: Saviors or Destroyers? [REVIEW]Jeff Buechner - 2010 - Ethics and Information Technology 12 (4):363-370.
  13. R.U.R. - Rossum’s Universal Robots.Karel Čapek - 1920 - Aventinum.
    The play begins in a factory that makes artificial people, called roboti (robots), from synthetic organic matter. They seem happy to work for humans at first, but that changes, and a hostile robot rebellion leads to the extinction of the human race.
  14. Artificial Liars: Why Computers Will (Necessarily) Deceive Us and Each Other. [REVIEW]Cristiano Castelfranchi - 2000 - Ethics and Information Technology 2 (2):113-119.
    In H-C interaction, computer-supported cooperation and organisation, computer-mediated commerce, intelligent databases, teams of robots, etc., there will be purposively deceiving computers. In particular, within the Agent-based paradigm we will have "deceiving agents". Several kinds of deception will be present in interaction with the user, or among people via computer, or among artificial agents, not only for malicious reasons (war, commerce, fraud, etc.) but also for goodwill and in our interest. Social control, trust, and moral aspects in artificial societies will be the focus of theoretical work as well as (...)
    4 citations
  15. Association of Internet Researchers Roundtable Summary: Artificial Intelligence and the Good Society Workshop Proceedings.Corinne Cath, Michael Zimmer, Stine Lomborg & Ben Zevenbergen - 2018 - Philosophy and Technology 31 (1):155-162.
    This article is based on a roundtable held at the Association of Internet Researchers annual conference in 2017, in Tartu, Estonia. The roundtable was organized by the Oxford Internet Institute’s Digital Ethics Lab. It was entitled “Artificial Intelligence and the Good Society”. It brought together four scholars—Michael Zimmer, Stine Lomborg, Ben Zevenbergen, and Corinne Cath—to discuss the promises and perils of artificial intelligence, in particular what ethical frameworks are needed to guide AI’s rapid development and increased use in societies. The (...)
    1 citation
  16. Future Technologies, Dystopic Futures and the Precautionary Principle.Steve Clarke - 2005 - Ethics and Information Technology 7 (3):121-126.
    It is sometimes suggested that new research in such areas as artificial intelligence, nanotechnology and genetic engineering should be halted or otherwise restricted because of concerns about possible catastrophic scenarios. Proponents of such restrictions typically invoke the precautionary principle, understood as a tool of policy formulation, as part of their case. Here I examine the application of the precautionary principle to possible catastrophic scenarios. I argue, along with Sunstein (Risk and Reason: Safety, Law and the Environment. Cambridge University Press, Cambridge, (...)
    9 citations
  17. The Tragedy of the Master: Automation, Vulnerability, and Distance.Mark Coeckelbergh - 2015 - Ethics and Information Technology 17 (3):219-229.
    Responding to long-standing warnings that robots and AI will enslave humans, I argue that the main problem we face is not that automation might turn us into slaves but, rather, that we remain masters. First I construct an argument concerning what I call ‘the tragedy of the master’: using the master–slave dialectic, I argue that automation technologies threaten to make us vulnerable, alienated, and automated masters. I elaborate the implications for power, knowledge, and experience. Then I critically discuss and question (...)
  18. Artificial Companions: Empathy and Vulnerability Mirroring in Human-Robot Relations.Mark Coeckelbergh - 2010 - Studies in Ethics, Law, and Technology 4 (3).
    Under what conditions can robots become companions and what are the ethical issues that might arise in human-robot companionship relations? I argue that the possibility and future of robots as companions depends on the robot’s capacity to be a recipient of human empathy, and that one necessary condition for this to happen is that the robot mirrors human vulnerabilities. For the purpose of these arguments, I make a distinction between empathy-as-cognition and empathy-as-feeling, connecting the latter to the moral sentiment tradition (...)
    1 citation
  19. The Ethical Knob: Ethically-Customisable Automated Vehicles and the Law.Giuseppe Contissa, Francesca Lagioia & Giovanni Sartor - 2017 - Artificial Intelligence and Law 25 (3):365-378.
    Accidents involving autonomous vehicles raise difficult ethical dilemmas and legal issues. It has been argued that self-driving cars should be programmed to kill, that is, they should be equipped with pre-programmed approaches to the choice of what lives to sacrifice when losses are inevitable. Here we shall explore a different approach, namely, giving the user/passenger the task of deciding what ethical approach should be taken by AVs in unavoidable accident scenarios. We thus assume that AVs are equipped with what we (...)
    1 citation
  20. The Rise of the Robots and the Crisis of Moral Patiency.John Danaher - forthcoming - AI and Society:1-8.
    This paper adds another argument to the rising tide of panic about robots and AI. The argument is intended to have broad civilization-level significance, but to involve less fanciful speculation about the likely future intelligence of machines than is common among many AI-doomsayers. The argument claims that the rise of the robots will create a crisis of moral patiency. That is to say, it will reduce the ability and willingness of humans to act in the world as responsible moral agents, (...)
    1 citation
  21. Why AI Doomsayers Are Like Sceptical Theists and Why It Matters.John Danaher - 2015 - Minds and Machines 25 (3):231-246.
    An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks. And there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom's Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about the (...)
  22. Did HAL Commit Murder?Daniel C. Dennett - 1997 - In D. Stork (ed.), Hal's Legacy: 2001's Computer As Dream and Reality. MIT Press.
    The first robot homicide was committed in 1981, according to my files. I have a yellowed clipping dated 12/9/81 from the Philadelphia Inquirer--not the National Enquirer--with the headline: "Robot killed repairman, Japan reports". The story was an anti-climax: at the Kawasaki Heavy Industries plant in Akashi, a malfunctioning robotic arm pushed a repairman against a gearwheel-milling machine, crushing him to death. The repairman had failed to follow proper instructions for shutting down the arm before entering the workspace. Why, indeed, had (...)
    5 citations
  23. Fundamental Issues in Social Robotics.Brian R. Duffy - 2006 - International Review of Information Ethics 6 (12):2006.
    Man and machine are rife with fundamental differences. Formal research in artificial intelligence and robotics has for half a century aimed to cross this divide, whether from the perspective of understanding man by building models, or building machines which could be as intelligent and versatile as humans. Inevitably, our sources of inspiration come from what exists around us, but to what extent should a machine's conception be sourced from such biological references as ourselves? Machines designed to be capable of explicit (...)
    2 citations
  24. Blay Whitby, Reflections on Artificial Intelligence: The Legal, Moral, and Ethical Dimensions, Exeter, UK: Intellect Books, 1996, 127 Pp., £14.95 (Paper), ISBN 1-871516-68-. [REVIEW]Stacey L. Edgar - 1999 - Minds and Machines 9 (1):133-139.
  25. The HeartMath Coherence Model: Implications and Challenges for Artificial Intelligence and Robotics.Stephen D. Edwards - forthcoming - AI and Society:1-7.
    HeartMath is a contemporary, scientific, coherent model of heart intelligence. The aim of this paper is to review this coherence model with special reference to its implications for artificial intelligence and robotics. Various conceptual issues, implications and challenges for AI and robotics are discussed. In view of seemingly infinite human capacity for creative, destructive and incoherent behaviour, it is highly recommended that designers and operators be persons of heart intelligence, optimal moral integrity, vision and mission. This implies that AI and (...)
  26. Benchmarks for Evaluating Socially Assistive Robotics.David Feil-Seifer, Kristine Skinner & Maja J. Mataric - 2007 - Interaction Studies 8 (3):423-439.
  27. AI in Medicine: A Japanese Perspective. [REVIEW]Dr Toshiyuki Furukawa - 1990 - AI and Society 4 (3):196-213.
    This article is concerned with the history and current state of research activities into medical expert systems (MES) in Japan. A brief review of expert systems' work over the last ten years is provided, and there is a discussion of future directions of artificial intelligence (AI) applications in medicine, which we expect the Japanese AI community in medicine (AIM) to undertake.
  28. Robustness to Fundamental Uncertainty in AGI Alignment.G. Gordon Worley III - manuscript
    The AGI alignment problem has a bimodal distribution of outcomes with most outcomes clustering around the poles of total success and existential, catastrophic failure. Consequently, attempts to solve AGI alignment should, all else equal, prefer false negatives (ignoring research programs that would have been successful) to false positives (pursuing research programs that will unexpectedly fail). Thus, we propose adopting a policy of responding to points of metaphysical and practical uncertainty associated with the alignment problem by limiting and choosing necessary assumptions (...)
  29. Ethics and the Future of Spying: Technology, National Security and Intelligence Collection.Jai Galliott & Warren Reed (eds.) - 2016 - Routledge.
    This volume examines the ethical issues generated by recent developments in intelligence collection and offers a comprehensive analysis of the key legal, moral and social questions thereby raised. Intelligence officers, whether gatherers, analysts or some combination thereof, are operating in a sea of social, political, scientific and technological change. This book examines the new challenges faced by the intelligence community as a result of these changes. It looks not only at how governments employ spies as a tool of state and (...)
  30. Artificial Intelligence: Opportunities and Implications for the Future of Decision Making.U. K. Government & Office for Science - 2016
    Artificial intelligence has arrived. In the online world it is already a part of everyday life, sitting invisibly behind a wide range of search engines and online commerce sites. It offers huge potential to enable more efficient and effective business and government but the use of artificial intelligence brings with it important questions about governance, accountability and ethics. Realising the full potential of artificial intelligence and avoiding possible adverse consequences requires societies to find satisfactory answers to these questions. This report (...)
  31. Preserving a Combat Commander’s Moral Agency: The Vincennes Incident as a Chinese Room.Patrick Chisan Hew - 2016 - Ethics and Information Technology 18 (3):227-235.
    We argue that a command and control system can undermine a commander’s moral agency if it causes him/her to process information in a purely syntactic manner, or if it precludes him/her from ascertaining the truth of that information. Our case is based on the resemblance between a commander’s circumstances and the protagonist in Searle’s Chinese Room, together with a careful reading of Aristotle’s notions of ‘compulsory’ and ‘ignorance’. We further substantiate our case by considering the Vincennes Incident, when the crew (...)
  32. Never Mind the Trolley: The Ethics of Autonomous Vehicles in Mundane Situations.Johannes Himmelreich - 2018 - Ethical Theory and Moral Practice 21 (3):669-684.
    Trolley cases are widely considered central to the ethics of autonomous vehicles. We caution against this by identifying four problems. Trolley cases, given technical limitations, rest on assumptions that are in tension with one another. Furthermore, trolley cases illuminate only a limited range of ethical issues insofar as they cohere with a certain design framework. Furthermore, trolley cases seem to demand a moral answer when a political answer is called for. Finally, trolley cases might be epistemically problematic in several ways. (...)
  33. Crash Algorithms for Autonomous Cars: How the Trolley Problem Can Move Us Beyond Harm Minimisation.Dietmar Hübner & Lucie White - 2018 - Ethical Theory and Moral Practice 21 (3):685-698.
    The prospective introduction of autonomous cars into public traffic raises the question of how such systems should behave when an accident is inevitable. Due to concerns with self-interest and liberal legitimacy that have become paramount in the emerging debate, a contractarian framework seems to provide a particularly attractive means of approaching this problem. We examine one such attempt, which derives a harm minimisation rule from the assumptions of rational self-interest and ignorance of one’s position in a future accident. We contend, (...)
  34. A Dilemma for Moral Deliberation in AI.Ryan Jenkins & Duncan Purves - forthcoming - International Journal of Applied Philosophy.
    Many social trends are conspiring to drive the adoption of greater automation in society, and we will certainly see a greater offloading of human decision-making to robots in the future. Many of these decisions are morally salient, including decisions about how benefits and burdens are distributed. Roboticists and ethicists have begun to think carefully about the moral decision-making apparatus for machines. Their concerns often center around the plausible claim that robots will lack many of the mental capacities that are (...)
  35. Against Leben's Rawlsian Collision Algorithm for Autonomous Vehicles.Geoff Keeling - 2017 - In Vincent C. Müller (ed.), Philosophy and theory of artificial intelligence 2017. Springer. pp. 259-272.
    Suppose that an autonomous vehicle encounters a situation where (i) imposing a risk of harm on at least one person is unavoidable; and (ii) a choice about how to allocate risks of harm between different persons is required. What does morality require in these cases? Derek Leben defends a Rawlsian answer to this question. I argue that we have reason to reject Leben’s answer.
  36. Artificial Decision-Making and Artificial Ethics: A Management Concern. [REVIEW]Omar E. M. Khalil - 1993 - Journal of Business Ethics 12 (4):313 - 321.
    Expert systems are knowledge-based information systems which are expected to have human attributes in order to replicate human capacity in ethical decision making. An expert system functions by virtue of its information, its inferential rules, and its decision criteria, each of which may be problematic. This paper addresses three basic reasons for ethical concern when using the currently available expert systems in a decision-making capacity. These reasons are (1) expert systems' lack of human intelligence, (2) expert systems' lack of emotions (...)
    1 citation
  37. Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence.Patrick Lin, Keith Abney & Ryan Jenkins (eds.) - 2017 - Oxford University Press.
    As robots slip into more domains of human life, from the operating room to the bedroom, they take on our morally important tasks and decisions, as well as create new risks from psychological to physical. This book answers the urgent call to study their ethical, legal, and policy impacts.
  38. The German Ethics Code for Automated and Connected Driving.Christoph Luetge - 2017 - Philosophy and Technology 30 (4):547-558.
    The ethics of autonomous cars and automated driving have been a subject of discussion in research for a number of years. As levels of automation progress, with partially automated driving already becoming standard in new cars from a number of manufacturers, the question of ethical and legal standards becomes virulent. For example, while automated and autonomous cars, being equipped with appropriate detection sensors, processors, and intelligent mapping material, have a chance of being much safer than human-driven cars in (...)
    1 citation
  39. Who Should Decide How Machines Make Morally Laden Decisions?Dominic Martin - 2017 - Science and Engineering Ethics 23 (4):951-967.
    Who should decide how a machine will decide what to do when it is driving a car, performing a medical procedure, or, more generally, when it is facing any kind of morally laden decision? More and more, machines are making complex decisions with a considerable level of autonomy. We should be much more preoccupied by this problem than we currently are. After a series of preliminary remarks, this paper will go over four possible answers to the question raised above. First, (...)
    1 citation
  40. Building Better Robots: Lessons From Observing Relationships Between Living Beings.Gail F. Melson - 2014 - Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems 15 (2):173-179.
  41. Will Hominoids or Androids Destroy the Earth —A Review of How to Create a Mind by Ray Kurzweil (2012).Michael Starks - 2017 - In Michael Starks (ed.), Philosophy, Human Nature and the Collapse of Civilization -- Articles and Reviews 2006-2017 by Michael Starks 3rd Ed. 675p (2017). Henderson, NV USA: Michael Starks. pp. 668-675.
    Some years ago I reached the point where I can usually tell from the title of a book, or at least from the chapter titles, what kinds of philosophical mistakes will be made and how frequently. In the case of nominally scientific works these may be largely restricted to certain chapters which wax philosophical or try to draw general conclusions about the meaning or long term significance of the work. Normally however the scientific matters of fact are generously interlarded with (...)
  42. Fundamental Issues of Artificial Intelligence.Vincent C. Müller (ed.) - 2016 - Springer.
    [Müller, Vincent C. (ed.), (2016), Fundamental issues of artificial intelligence (Synthese Library, 377; Berlin: Springer). 570 pp.] -- This volume offers a look at the fundamental issues of present and future AI, especially from cognitive science, computer science, neuroscience and philosophy. This work examines the conditions for artificial intelligence, how these relate to the conditions for intelligence in humans and other natural agents, as well as ethical and societal problems that artificial intelligence raises or will raise. The key issues this (...)
  43. Risks of General Intelligence.Vincent C. Müller (ed.) - 2015 - CRC Press - Chapman & Hall.
  44. Introduction: Philosophy and Theory of Artificial Intelligence.Vincent C. Müller - 2012 - Minds and Machines 22 (2):67-69.
    The theory and philosophy of artificial intelligence has come to a crucial point where the agenda for the forthcoming years is in the air. This special volume of Minds and Machines presents leading invited papers from a conference on the “Philosophy and Theory of Artificial Intelligence” that was held in October 2011 in Thessaloniki. Artificial Intelligence is perhaps unique among engineering subjects in that it has raised very basic questions about the nature of computing, perception, reasoning, learning, language, action, interaction, (...)
    1 citation
  45. Doctor of Philosophy Thesis in Military Informatics (OpenPhD): Lethal Autonomy of Weapons is Designed and/or Recessive.Nyagudi Nyagudi Musandu - 2016-12-09 - Dissertation, OpenPhD (#OpenPhD), e.g. Wikiversity https://en.wikiversity.org/wiki/Doctor_of_Philosophy, etc.
    My original contribution to knowledge is: Any weapon that exhibits intended and/or unintended lethal autonomy in targeting and interdiction – does so by way of design and/or recessive flaw(s) in its systems of control – any such weapon is capable of war-fighting and other battle-space interaction in a manner that its Human Commander does not anticipate. Even with the complexity of Lethal Autonomy issues there is nothing particular to gain from being a low-tech Military. Lethal autonomous weapons are therefore (...)
  46. Decision Making, Computer Attitudes and Expert Systems: What is Our Direction? [REVIEW]Peter P. Mykytyn Jr - 1989 - AI and Society 3 (2):133-141.
    Expert systems have been concerned with applications dealing with medical diagnosis, mineral exploration, and computer configuration, with some efforts relatively successful in achieving results at least as good as human experts. Today, much is being written about these systems and managerial decision-making activities in organizations and the positive impact that they can have in these situations. However, it appears that expert systems could become somewhat of a panacea for some organizational ailments as research, development, and marketing of them continues at (...)
  47. Responsibility Practices and Unmanned Military Technologies.Merel Noorman - 2014 - Science and Engineering Ethics 20 (3):809-826.
    The prospect of increasingly autonomous military robots has raised concerns about the obfuscation of human responsibility. This paper argues that whether or not and to what extent human actors are and will be considered to be responsible for the behavior of robotic systems is and will be the outcome of ongoing negotiations between the various human actors involved. These negotiations are about what technologies should do and mean, but they are also about how responsibility should be interpreted and how it (...)
    2 citations
  48. Superintelligence as Superethical.Steve Petersen - 2017 - In Patrick Lin, Keith Abney & Ryan Jenkins (eds.), Robot Ethics 2.0. New York, USA: Oxford University Press. pp. 322-337.
    Nick Bostrom's book *Superintelligence* outlines a frightening but realistic scenario for human extinction: true artificial intelligence is likely to bootstrap itself into superintelligence, and thereby become ideally effective at achieving its goals. Human-friendly goals seem too abstract to be pre-programmed with any confidence, and if those goals are *not* explicitly favorable toward humans, the superintelligence will extinguish us---not through any malice, but simply because it will want our resources for its own purposes. In response I argue that things might not (...)
  49. Friendly Superintelligent AI: All You Need is Love.Michael Prinzing - 2018 - In Vincent C. Müller (ed.), The Philosophy & Theory of Artificial Intelligence. Berlin: Springer. pp. 288-301.
    There is a non-trivial chance that sometime in the (perhaps somewhat distant) future, someone will build an artificial general intelligence that will surpass human-level cognitive proficiency and go on to become "superintelligent", vastly outperforming humans. The advent of superintelligent AI has great potential, for good or ill. It is therefore imperative that we find a way to ensure, long before one arrives, that any superintelligence we build will consistently act in ways congenial to our interests. This is a very difficult challenge in (...)
  50. Autonomous Machines, Moral Judgment, and Acting for the Right Reasons.Duncan Purves, Ryan Jenkins & Bradley J. Strawser - 2015 - Ethical Theory and Moral Practice 18 (4):851-872.
    We propose that the prevalent moral aversion to AWS is supported by a pair of compelling objections. First, we argue that even a sophisticated robot is not the kind of thing that is capable of replicating human moral judgment. This conclusion follows if human moral judgment is not codifiable, i.e., it cannot be captured by a list of rules. Moral judgment requires either the ability to engage in wide reflective equilibrium, the ability to perceive certain facts as moral considerations, moral (...)
    4 citations