1 — 50 / 149
  1. Alison Adam (2005). Delegating and distributing morality: Can we inscribe privacy protection in a machine? [REVIEW] Ethics and Information Technology 7 (4):233-242.
    This paper addresses the question of delegation of morality to a machine, through a consideration of whether or not non-humans can be considered to be moral. The aspect of morality under consideration here is protection of privacy. The topic is introduced through two cases where there was a failure in sharing and retaining personal data protected by UK data protection law, with tragic consequences. In some sense this can be regarded as a failure in the process of delegating morality to (...)
  2. M. Anderson, S. L. Anderson & C. Armen (eds.) (2005). Association for the Advancement of Artificial Intelligence Fall Symposium Technical Report.
  3. Michael Anderson & Susan Leigh Anderson (2007). The Status of Machine Ethics: A Report From the AAAI Symposium. [REVIEW] Minds and Machines 17 (1):1-10.
    This paper is a summary and evaluation of work presented at the AAAI 2005 Fall Symposium on Machine Ethics that brought together participants from the fields of Computer Science and Philosophy to the end of clarifying the nature of this newly emerging field and discussing different approaches one could take towards realizing the ultimate goal of creating an ethical machine.
  4. Susan Leigh Anderson (2011). Machine Metaethics. In M. Anderson & S. Anderson (eds.), Machine Ethics. Cambridge University Press.
  5. Susan Leigh Anderson (2011). The Unacceptability of Asimov's Three Laws of Robotics as a Basis for Machine Ethics. In M. Anderson & S. Anderson (eds.), Machine Ethics. Cambridge University Press.
  6. Susan Leigh Anderson (2011). Philosophical Concerns with Machine Ethics. In M. Anderson & S. Anderson (eds.), Machine Ethics. Cambridge University Press.
  7. Susan Leigh Anderson (2008). Asimov's “Three Laws of Robotics” and Machine Metaethics. AI and Society 22 (4):477-493.
    Using Asimov’s “Bicentennial Man” as a springboard, a number of metaethical issues concerning the emerging field of machine ethics are discussed. Although the ultimate goal of machine ethics is to create autonomous ethical machines, this presents a number of challenges. A good way to begin the task of making ethics computable is to create a program that enables a machine to act as an ethical advisor to human beings. This project, unlike creating an autonomous ethical machine, will not require that we (...)
  8. Susan Leigh Anderson & Michael Anderson (2011). A Prima Facie Duty Approach to Machine Ethics: Machine Learning of Features of Ethical Dilemmas, Prima Facie Duties, and Decision Principles Through a Dialogue with Ethicists. In M. Anderson & S. Anderson (eds.), Machine Ethics. Cambridge University Press.
  9. Susan Anderson & Michael Anderson (eds.) (2011). Machine Ethics. Cambridge University Press.
    The essays in this volume represent the first steps by philosophers and artificial intelligence researchers toward explaining why it is necessary to add an ...
  10. Stuart Armstrong, Anders Sandberg & Nick Bostrom (2012). Thinking Inside the Box: Controlling and Using an Oracle AI. Minds and Machines 22 (4):299-324.
    There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed motivation system. Solving this issue in general has proven to be considerably harder than expected. This paper looks at one particular approach, Oracle AI. An Oracle AI is an AI that does not act in (...)
  11. Peter M. Asaro (2006). What Should We Want From a Robot Ethic? International Review of Information Ethics 6 (12):9-16.
    There are at least three things we might mean by "ethics in robotics": the ethical systems built into robots, the ethics of people who design and use robots, and the ethics of how people treat robots. This paper argues that the best approach to robot ethics is one which addresses all three of these, and to do this it ought to consider robots as socio-technical systems. By so doing, it is possible to think of a continuum of agency that lies (...)
  12. Hutan Ashrafian (2015). AIonAI: A Humanitarian Law of Artificial Intelligence and Robotics. Science and Engineering Ethics 21 (1):29-40.
    The enduring progression of artificial intelligence and cybernetics offers an ever-closer possibility of rational and sentient robots. The ethics and morals deriving from this technological prospect have been considered in the philosophy of artificial intelligence, the design of automatons with roboethics and the contemplation of machine ethics through the concept of artificial moral agents. Across these categories, the robotics laws first proposed by Isaac Asimov in the twentieth century remain well-recognised and esteemed due to their specification of preventing human harm, (...)
  13. Hutan Ashrafian (2015). Artificial Intelligence and Robot Responsibilities: Innovating Beyond Rights. Science and Engineering Ethics 21 (2):317-326.
    The enduring innovations in artificial intelligence and robotics offer the promised capacity of computer consciousness, sentience and rationality. The development of these advanced technologies has been considered to merit rights; however, these can only be ascribed in the context of commensurate responsibilities and duties. This represents the discernible next step for evolution in this field. Addressing these needs requires attention to the philosophical perspectives of moral responsibility for artificial intelligence and robotics. A contrast to the moral status of animals may be (...)
  14. Phil Badger (2014). The Morality Machine. Philosophy Now 104:24-27.
  15. William Sims Bainbridge (2012). Whole-Personality Emulation. International Journal of Machine Consciousness 4 (01):159-175.
  16. Parthasarathi Banerjee (2007). Technology of Culture: The Roadmap of a Journey Undertaken. [REVIEW] AI and Society 21 (4):411-419.
    Artificial intelligence (AI) impacts society and the individual in many subtler and deeper ways than machines based upon the physics and mechanics of descriptive objects. The AI project thus involves culture and provides scope for liberational undertakings. Most importantly, AI implicates human ethical and attitudinal bearings. This essay explores how previous authors in this journal have explored related issues and how such discourses have provided the present world with a roadmap that can be followed to engage in discourses with ethical (...)
  17. Fiorella Battaglia & Nikil Mukerji (2015). Technikethik. In Julian Nida-Rümelin, Irina Spiegel & Markus Tiedemann (eds.), Handbuch Philosophie und Ethik - Band 2: Disziplinen und Themen. UTB 288-295.
  18. Fiorella Battaglia, Nikil Mukerji & Julian Nida-Rümelin (2014). Science, Technology, and Responsibility. In Fiorella Battaglia, Nikil Mukerji & Julian Nida-Rümelin (eds.), Rethinking Responsibility in Science and Technology. Pisa University Press 7-11.
    The empirical circumstances in which human beings ascribe responsibility to one another are subject to change. Science and technology play a great part in this transformation process. Therefore, it is important for us to rethink the idea, the role and the normative standards behind responsibility in a world that is constantly changing under the influence of scientific and technological progress. This volume is a contribution to that joint societal effort.
  19. Anthony F. Beavers, What Can a Robot Teach Us About Kantian Ethics? (in process).
    In this paper, I examine a variety of agents that appear in Kantian ethics in order to determine which would be necessary to make a robot a genuine moral agent. However, building such an agent would require that we structure into a robot’s behavioral repertoire the possibility for immoral behavior, for only then can the moral law, according to Kant, manifest itself as an ought, a prerequisite for being able to hold an agent morally accountable for its actions. Since building (...)
  20. Anthony F. Beavers (forthcoming). Moral Machines and the Threat of Ethical Nihilism. In Patrick Lin, George Bekey & Keith Abney (eds.), Robot Ethics: The Ethical and Social Implications of Robotics.
    In his famous 1950 paper, where he presents what became the benchmark for success in artificial intelligence, Turing notes that "at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted" (Turing 1950, 442). Kurzweil (1990) suggests that Turing's prediction was correct, even if no machine has yet passed the Turing Test. In the wake of the (...)
  21. Barbara Becker (2006). Social Robots-Emotional Agents: Some Remarks on Naturalizing Man-Machine Interaction. International Review of Information Ethics 6:37-45.
    The construction of embodied conversational agents - robots as well as avatars - seems to be a new challenge in the field of both cognitive AI and human-computer-interface development. On the one hand, one aims at gaining new insights into the development of cognition and communication by constructing intelligent, physically instantiated artefacts. On the other hand, people are driven by the idea that humanlike mechanical dialogue partners will have a positive effect on human-machine communication. In this contribution I put up for discussion whether (...)
  22. Paul Bello & Selmer Bringsjord (2013). On How to Build a Moral Machine. Topoi 32 (2):251-266.
    Herein we make a plea to machine ethicists for the inclusion of constraints on their theories consistent with empirical data on human moral cognition. As philosophers, we clearly lack widely accepted solutions to issues regarding the existence of free will, the nature of persons, and firm conditions on moral agency/patienthood, all of which are indispensable concepts to be deployed by any machine able to make moral judgments. No agreement seems forthcoming on these matters, and we don’t hold out hope for (...)
  23. Oliver Bendel (2016). Considerations About the Relationship Between Animal and Machine Ethics. AI and Society 31 (1):103-108.
  24. Ashok J. Bharucha, Alex John London, David Barnard, Howard Wactlar, Mary Amanda Dew & Charles F. Reynolds (2006). Ethical Considerations in the Conduct of Electronic Surveillance Research. Journal of Law, Medicine & Ethics 34 (3):611-619.
    The extant clinical literature indicates profound problems in the assessment, monitoring, and documentation of care in long-term care facilities. The lack of adequate resources to accommodate higher staff-to-resident ratios adds additional urgency to the goal of identifying more cost-effective mechanisms to provide care oversight. The ever-expanding array of electronic monitoring technologies in the clinical research arena demands a conceptual and pragmatic framework for the resolution of ethical tensions inherent in the use of such innovative tools. CareMedia is a project (...)
  25. Russell Blackford & Damien Broderick (eds.) (2014). Intelligence Unbound: The Future of Uploaded and Machine Minds. Wiley-Blackwell.
    _Intelligence Unbound_ explores the prospects, promises, and potential dangers of machine intelligence and uploaded minds in a collection of state-of-the-art essays from internationally recognized philosophers, AI researchers, science fiction authors, and theorists. It is a compelling and intellectually sophisticated exploration of the latest thinking on Artificial Intelligence and machine minds; features contributions from an international cast of philosophers, Artificial Intelligence researchers, science fiction authors, and more; offers current, diverse perspectives on machine intelligence and uploaded minds, emerging topics of tremendous interest; and illuminates the nature (...)
  28. Blay Whitby (2013). When is Any Agent a Moral Agent? Reflections on Machine Consciousness and Moral Agency. International Journal of Machine Consciousness 5 (1).
  29. Magnus Boman (1999). Norms in Artificial Decision Making. Artificial Intelligence and Law 7 (1):17-35.
    A method for forcing norms onto individual agents in a multi-agent system is presented. The agents under study are supersoft agents: autonomous artificial agents programmed to represent and evaluate vague and imprecise information. Agents are further assumed to act in accordance with advice obtained from a normative decision module, with which they can communicate. Norms act as global constraints on the evaluations performed in the decision module and hence no action that violates a norm will be suggested to any agent. (...)
  30. Selmer Bringsjord (2007). Ethical Robots: The Future Can Heed Us. [REVIEW] AI and Society 22 (4):539-550.
    Bill Joy’s deep pessimism is now famous. Why the Future Doesn’t Need Us, his defense of that pessimism, has been read by, it seems, everyone—and many of these readers, apparently, have been converted to the dark side, or rather more accurately, to the future-is-dark side. Fortunately (for us; unfortunately for Joy), the defense, at least the part of it that pertains to AI and robotics, fails. Ours may be a dark future, but we cannot know that on the basis of (...)
  31. Selmer Bringsjord, Joshua Taylor, Bram van Heuveln, Konstantine Arkoudas, Micah Clark & Ralph Wojtowicz (2011). Piagetian Roboethics via Category Theory: Moving Beyond Mere Formal Operations to Engineer Robots Whose Decisions Are Guaranteed to Be Ethically Correct. In M. Anderson & S. Anderson (eds.), Machine Ethics. Cambridge University Press.
  32. David J. Calverley (2011). To Some, the Question of Whether Legal Rights Should, or Even. In M. Anderson & S. Anderson (eds.), Machine Ethics. Cambridge University Press 213.
  33. David J. Calverley (2007). Imagining a Non-Biological Machine as a Legal Person. AI and Society 22 (4):523-537.
    As non-biological machines come to be designed in ways which exhibit characteristics comparable to human mental states, the manner in which the law treats these entities will become increasingly important both to designers and to society at large. The direct question will become whether, given certain attributes, a non-biological machine could ever be viewed as a legal person. In order to begin to understand the ramifications of this question, this paper starts by exploring the distinction between the related concepts of (...)
  34. Ginevra Castellano & Christopher Peters (2010). Socially Perceptive Robots: Challenges and Concerns. Interaction Studies 11 (2):201-207.
  35. Marcello Guarini (2011). Computational Neural Modeling and the Philosophy of Ethics: Reflections on the Particularism-Generalism Debate. In M. Anderson & S. Anderson (eds.), Machine Ethics. Cambridge University Press.
  36. Anthony Chemero, Ascribing Moral Value and the Embodied Turing Test.
    What would it take for an artificial agent to be treated as having moral value? As a first step toward answering this question, we ask what it would take for an artificial agent to be capable of the sort of autonomous, adaptive social behavior that is characteristic of the animals that humans interact with. We propose that this sort of capacity is best measured by what we call the Embodied Turing Test. The Embodied Turing test is a test in which (...)
  37. Mark Coeckelbergh (2013). David J. Gunkel: The Machine Question: Critical Perspectives on AI, Robots, and Ethics. [REVIEW] Ethics and Information Technology 15 (3):235-238.
  38. Mark Coeckelbergh (2009). Virtual Moral Agency, Virtual Moral Responsibility: On the Moral Significance of the Appearance, Perception, and Performance of Artificial Agents. [REVIEW] AI and Society 24 (2):181-189.
  39. Jennifer C. Cook (2006). Machine and Metaphor: The Ethics of Language in American Realism. Routledge.
    American literary realism burgeoned during a period of tremendous technological innovation. Because the realists evinced not only a fascination with this new technology but also an ethos that seems to align itself with science, many have paired the two fields rather unproblematically. But this book demonstrates that many realist writers, from Mark Twain to Stephen Crane, Charles W. Chesnutt to Edith Wharton, felt a great deal of anxiety about the advent of new technologies – precisely at the crucial intersection of (...)
  40. Roberto Cordeschi & Guglielmo Tamburrini (2005). Intelligent Machines and Warfare: Historical Debates and Epistemologically Motivated Concerns. In L. Magnani (ed.), European Computing and Philosophy Conference (ECAP 2004). College Publications
    The early examples of self-directing robots attracted the interest of both scientific and military communities. Biologists regarded these devices as material models of animal tropisms. Engineers envisaged the possibility of turning self-directing robots into new “intelligent” torpedoes during World War I. Starting from World War II, more extensive interactions developed between theoretical inquiry and applied military research on the subject of adaptive and intelligent machinery. Pioneers of Cybernetics were involved in the development of goal-seeking warfare devices. But collaboration occasionally turned (...)
  41. Hilde Corneliussen (2005). ‘I Fell in Love with the Machine’: Women’s Pleasure in Computing. Journal of Information, Communication and Ethics in Society 3 (4):233-241.
  42. Emad Abdel Rahim Dahiyat (2007). Intelligent Agents and Contracts: Is a Conceptual Rethink Imperative? [REVIEW] Artificial Intelligence and Law 15 (4):375-390.
    The emergence of intelligent software agents that operate autonomously with little or no human intervention has generated many doctrinal questions at a conceptual level and has challenged the traditional rules of contract especially those relating to the intention as an essential requirement of any contract conclusion. In this paper, we will try to explore some of these challenges, and shed light on the conflict between the traditional contract theory and the transactional practice in the case of using intelligent software agents. (...)
  43. John Danaher (forthcoming). Will Life Be Worth Living in a World Without Work? Technological Unemployment and the Meaning of Life. Science and Engineering Ethics:1-24.
    Suppose we are about to enter an era of increasing technological unemployment. What implications does this have for society? Two distinct ethical/social issues would seem to arise. The first is one of distributive justice: how will the (presumed) efficiency gains from automated labour be distributed through society? The second is one of personal fulfillment and meaning: if people no longer have to work, what will they do with their lives? In this article, I set aside the first issue and focus (...)
  44. John Danaher (2015). Why AI Doomsayers Are Like Sceptical Theists and Why It Matters. Minds and Machines 25 (3):231-246.
    An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks, and there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about the (...)
  45. P. A. Danielson (2011). Prototyping N-Reasons: A Computer Mediated Ethics Machine. In M. Anderson & S. Anderson (eds.), Machine Ethics. Cambridge University Press 9.
  46. Peter Danielson (2010). Designing a Machine to Learn About the Ethics of Robotics: The N-Reasons Platform. [REVIEW] Ethics and Information Technology 12 (3):251-261.
    We can learn about human ethics from machines. We discuss the design of a working machine for making ethical decisions, the N-Reasons platform, applied to the ethics of robots. This N-Reasons platform builds on web-based surveys and experiments to enable participants to make better ethical decisions. Their decisions are better than our existing surveys in three ways. First, they are social decisions supported by reasons. Second, these results are based on weaker premises, as no exogenous expertise (aside from that (...)
  47. Paul B. de Laat (2015). Trusting the (Ro)Botic Other: By Assumption? SIGCAS Computers and Society 45 (3):255-260.
    How may human agents come to trust (sophisticated) artificial agents? At present, since the trust involved is non-normative, this would seem to be a slow process, depending on the outcomes of the transactions. Some more options may soon become available though. As debated in the literature, humans may meet (ro)bots as they are embedded in an institution. If they happen to trust the institution, they will also trust them to have tried out and tested the machines in their back corridors; (...)
  48. Peter H. Denton (2014). Review of "The Machine Question: Critical Perspectives on AI, Robots, and Ethics". [REVIEW] Essays in Philosophy 15 (1):179-183.
  49. Ezio Di Nucci & Filippo Santoni de Sio (forthcoming). Who’s Afraid of Robots? Fear of Automation and the Ideal of Direct Control. In Fiorella Battaglia & Natalie Weidenfeld (eds.), Roboethics in Film. Pisa University Press
    We argue that lack of direct and conscious control is not, in principle, a reason to be afraid of machines in general and robots in particular: in order to articulate the ethical and political risks of increasing automation one must, therefore, tackle the difficult task of precisely delineating the theoretical and practical limits of sustainable delegation to robots.
  50. Gordana Dodig Crnkovic & Daniel Persson (2008). Sharing Moral Responsibility with Robots: A Pragmatic Approach. In Holst, Per Kreuger & Peter Funk (eds.), Frontiers in Artificial Intelligence and Applications Volume 173. IOS Press Books
    Roboethics is a recently developed field of applied ethics which deals with the ethical aspects of technologies such as robots, ambient intelligence, direct neural interfaces, invasive nano-devices and intelligent soft bots. In this article we look specifically at the issue of (moral) responsibility in artificial intelligent systems. We argue for a pragmatic approach, where responsibility is seen as a social regulatory mechanism. We claim that having a system which takes care of certain tasks intelligently, learning from experience and making (...)