About this topic
Summary: Machine ethics is about artificial moral agency. Machine ethicists ask why agents, whether human beings or other organisms, do what they do when they do it, what makes those actions the right ones, and how this process might be articulated (ideally) in an independent artificial system rather than in a biological child. This category accordingly includes entries on agency, especially moral agency, and on what it means to be an agent in general. On the empirical side, machine ethicists interpret rapidly advancing work in robotics and AI through traditional ethical frameworks, while in the other direction helping to frame robotics research in terms of ethical theory. For example, intelligent machines are most often modeled on biological systems, and in any event are often made sense of in terms of biological systems, so there is work to be done in this process of interpretation and integration. More theoretical work asks what relative status should be afforded artificial agents given their degree of autonomy, their origin, their level of complexity, their corporate-institutional and legal standing, and so on. So understood, machine ethics sits in the middle of a maelstrom of current research activity, with direct bearing on traditional ethics and with extensive popular implications as well.
Key works: Allen et al. 2005; Wallach et al. 2007; Tonkens 2012; Tonkens 2009; Müller & Bostrom 2014; White 2013; White 2015
1 — 50 / 165
  1. Colin Allen, Iva Smit & Wendell Wallach (2005). Artificial Morality: Top-Down, Bottom-Up, and Hybrid Approaches. [REVIEW] Ethics and Information Technology 7 (3):149-155.
    A principal goal of the discipline of artificial morality is to design artificial agents to act as if they are moral agents. Intermediate goals of artificial morality are directed at building into AI systems sensitivity to the values, ethics, and legality of activities. The development of an effective foundation for the field of artificial morality involves exploring the technological and philosophical issues involved in making computers into explicit moral reasoners. The goal of this paper is to discuss strategies for implementing (...)
  2. Richard Alterman (2000). Rethinking Autonomy. Minds and Machines 10 (1):15-30.
    This paper explores the assumption of autonomy. Several arguments are presented against the assumption of runtime autonomy as a principle of design for artificial intelligence systems. The arguments vary from being theoretical, to practical, and to analytic. The latter parts of the paper focus on one strategy for building non-autonomous systems (the practice view). One critical theme is that intelligence is not located in the system alone, it emerges from a history of interactions among user, builder, and designer over a (...)
  3. David Leech Anderson (2012). Machine Intentionality, the Moral Status of Machines, and the Composition Problem. In Vincent C. Müller (ed.), The Philosophy & Theory of Artificial Intelligence. Springer 312-333.
    According to the most popular theories of intentionality, a family of theories we will refer to as “functional intentionality,” a machine can have genuine intentional states so long as it has functionally characterizable mental states that are causally hooked up to the world in the right way. This paper considers a detailed description of a robot that seems to meet the conditions of functional intentionality, but which falls victim to what I call “the composition problem.” One obvious way to escape (...)
  4. M. Anderson, S. L. Anderson & C. Armen (eds.) (2005). Association for the Advancement of Artificial Intelligence Fall Symposium Technical Report.
  5. Michael Anderson & Susan Leigh Anderson (2007). The Status of Machine Ethics: A Report From the AAAI Symposium. [REVIEW] Minds and Machines 17 (1):1-10.
    This paper is a summary and evaluation of work presented at the AAAI 2005 Fall Symposium on Machine Ethics that brought together participants from the fields of Computer Science and Philosophy to the end of clarifying the nature of this newly emerging field and discussing different approaches one could take towards realizing the ultimate goal of creating an ethical machine.
  6. Susan Leigh Anderson (2011). Machine Metaethics. In M. Anderson & S. L. Anderson (eds.), Machine Ethics. Cambridge Univ. Press
  7. Susan Leigh Anderson (2011). The Unacceptability of Asimov's Three Laws of Robotics as a Basis for Machine Ethics. In M. Anderson & S. L. Anderson (eds.), Machine Ethics. Cambridge Univ. Press
  8. Susan Leigh Anderson (2011). Philosophical Concerns with Machine Ethics. In M. Anderson & S. L. Anderson (eds.), Machine Ethics. Cambridge Univ. Press
  9. Susan Leigh Anderson (2008). Asimov's “Three Laws of Robotics” and Machine Metaethics. AI and Society 22 (4):477-493.
    Using Asimov’s “Bicentennial Man” as a springboard, a number of metaethical issues concerning the emerging field of machine ethics are discussed. Although the ultimate goal of machine ethics is to create autonomous ethical machines, this presents a number of challenges. A good way to begin the task of making ethics computable is to create a program that enables a machine to act an ethical advisor to human beings. This project, unlike creating an autonomous ethical machine, will not require that we (...)
  10. Susan Leigh Anderson & Michael Anderson (2011). A Prima Facie Duty Approach to Machine Ethics: Machine Learning of Features of Ethical Dilemmas, Prima Facie Duties, and Decision Principles Through a Dialogue with Ethicists. In M. Anderson & S. L. Anderson (eds.), Machine Ethics. Cambridge Univ. Press
  11. Susan Anderson & Michael Anderson (eds.) (2011). Machine Ethics. Cambridge University Press.
    The essays in this volume represent the first steps by philosophers and artificial intelligence researchers toward explaining why it is necessary to add an ...
  12. Stuart Armstrong, Anders Sandberg & Nick Bostrom (2012). Thinking Inside the Box: Controlling and Using an Oracle AI. Minds and Machines 22 (4):299-324.
    There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed motivation system. Solving this issue in general has proven to be considerably harder than expected. This paper looks at one particular approach, Oracle AI. An Oracle AI is an AI that does not act in (...)
  13. Peter M. Asaro (2006). What Should We Want From a Robot Ethic. International Review of Information Ethics 6 (12):9-16.
    There are at least three things we might mean by "ethics in robotics": the ethical systems built into robots, the ethics of people who design and use robots, and the ethics of how people treat robots. This paper argues that the best approach to robot ethics is one which addresses all three of these, and to do this it ought to consider robots as socio-technical systems. By so doing, it is possible to think of a continuum of agency that lies (...)
  14. Hutan Ashrafian (forthcoming). Can Artificial Intelligences Suffer From Mental Illness? A Philosophical Matter to Consider. Science and Engineering Ethics:1-10.
    The potential for artificial intelligences and robotics in achieving the capacity of consciousness, sentience and rationality offers the prospect that these agents have minds. If so, then there may be a potential for these minds to become dysfunctional, or for artificial intelligences and robots to suffer from mental illness. The existence of artificially intelligent psychopathology can be interpreted through the philosophical perspectives of mental illness. This offers new insights into what it means to have either robot or human mental disorders, (...)
  15. Hutan Ashrafian (2015). AIonAI: A Humanitarian Law of Artificial Intelligence and Robotics. Science and Engineering Ethics 21 (1):29-40.
    The enduring progression of artificial intelligence and cybernetics offers an ever-closer possibility of rational and sentient robots. The ethics and morals deriving from this technological prospect have been considered in the philosophy of artificial intelligence, the design of automatons with roboethics and the contemplation of machine ethics through the concept of artificial moral agents. Across these categories, the robotics laws first proposed by Isaac Asimov in the twentieth century remain well-recognised and esteemed due to their specification of preventing human harm, (...)
  16. Hutan Ashrafian (2015). Artificial Intelligence and Robot Responsibilities: Innovating Beyond Rights. Science and Engineering Ethics 21 (2):317-326.
    The enduring innovations in artificial intelligence and robotics offer the promised capacity of computer consciousness, sentience and rationality. The development of these advanced technologies have been considered to merit rights, however these can only be ascribed in the context of commensurate responsibilities and duties. This represents the discernable next-step for evolution in this field. Addressing these needs requires attention to the philosophical perspectives of moral responsibility for artificial intelligence and robotics. A contrast to the moral status of animals may be (...)
  17. Phil Badger (2014). The Morality Machine. Philosophy Now 104:24-27.
  18. William Sims Bainbridge (2012). Whole-Personality Emulation. International Journal of Machine Consciousness 4 (01):159-175.
  19. Parthasarathi Banerjee (2007). Technology of Culture: The Roadmap of a Journey Undertaken. [REVIEW] AI and Society 21 (4):411-419.
    Artificial intelligence (AI) impacts society and an individual in many subtler and deeper ways than machines based upon the physics and mechanics of descriptive objects. The AI project involves thus culture and provides scope to liberational undertakings. Most importantly AI implicates human ethical and attitudinal bearings. This essay explores how previous authors in this journal have explored related issues and how such discourses have provided to the present world a roadmap that can be followed to engage in discourses with ethical (...)
  20. Xabier Barandiaran, E. Di Paolo & M. Rohde (2009). Defining Agency: Individuality, Normativity, Asymmetry, and Spatio-Temporality in Action. Adaptive Behavior 17 (5):367-386.
    The concept of agency is of crucial importance in cognitive science and artificial intelligence, and it is often used as an intuitive and rather uncontroversial term, in contrast to more abstract and theoretically heavy-weighted terms like “intentionality”, “rationality” or “mind”. However, most of the available definitions of agency are either too loose or unspecific to allow for a progressive scientific program. They implicitly and unproblematically assume the features that characterize agents, thus obscuring the full potential and challenge of modeling agency. (...)
  21. John Basl (2014). Machines as Moral Patients We Shouldn't Care About (Yet): The Interests and Welfare of Current Machines. Philosophy and Technology 27 (1):79-96.
    In order to determine whether current (or future) machines have a welfare that we as agents ought to take into account in our moral deliberations, we must determine which capacities give rise to interests and whether current machines have those capacities. After developing an account of moral patiency, I argue that current machines should be treated as mere machines. That is, current machines should be treated as if they lack those capacities that would give rise to psychological interests. Therefore, they (...)
  22. John Basl (2013). The Ethics of Creating Artificial Consciousness. APA Newsletter on Philosophy and Computers 13 (1):23-29.
  23. Fiorella Battaglia & Nikil Mukerji (2015). Technikethik. In Julian Nida-Rümelin, Irina Spiegel & Markus Tiedemann (eds.), Handbuch Philosophie und Ethik - Band 2: Disziplinen und Themen. UTB 288-295.
  24. Fiorella Battaglia, Nikil Mukerji & Julian Nida-Rümelin (2014). Science, Technology, and Responsibility. In Fiorella Battaglia, Nikil Mukerji & Julian Nida-Rümelin (eds.), Rethinking Responsibility in Science and Technology. Pisa University Press 7-11.
    The empirical circumstances in which human beings ascribe responsibility to one another are subject to change. Science and technology play a great part in this transformation process. Therefore, it is important for us to rethink the idea, the role and the normative standards behind responsibility in a world that is constantly changing under the influence of scientific and technological progress. This volume is a contribution to that joint societal effort.
  25. Anthony F. Beavers, What Can a Robot Teach Us About Kantian Ethics? (in process).
    In this paper, I examine a variety of agents that appear in Kantian ethics in order to determine which would be necessary to make a robot a genuine moral agent. However, building such an agent would require that we structure into a robot’s behavioral repertoire the possibility for immoral behavior, for only then can the moral law, according to Kant, manifest itself as an ought, a prerequisite for being able to hold an agent morally accountable for its actions. Since building (...)
  26. Anthony F. Beavers (forthcoming). Moral Machines and the Threat of Ethical Nihilism. In Patrick Lin, George Bekey & Keith Abney (eds.), Robot Ethics: The Ethical and Social Implication of Robotics.
    In his famous 1950 paper where he presents what became the benchmark for success in artificial intelligence, Turing notes that "at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted" (Turing 1950, 442). Kurzweil (1990) suggests that Turing's prediction was correct, even if no machine has yet to pass the Turing Test. In the wake of the (...)
  27. Barbara Becker (2006). Social Robots-Emotional Agents: Some Remarks on Naturalizing Man-Machine Interaction. International Review of Information Ethics 6:37-45.
    The construction of embodied conversational agents - robots as well as avatars - seem to be a new challenge in the field of both cognitive AI and human-computer-interface development. On the one hand, one aims at gaining new insights in the development of cognition and communication by constructing intelligent, physical instantiated artefacts. On the other hand people are driven by the idea, that humanlike mechanical dialog-partners will have a positive effect on human-machine-communication. In this contribution I put for discussion whether (...)
  28. Paul Bello & Selmer Bringsjord (2013). On How to Build a Moral Machine. Topoi 32 (2):251-266.
    Herein we make a plea to machine ethicists for the inclusion of constraints on their theories consistent with empirical data on human moral cognition. As philosophers, we clearly lack widely accepted solutions to issues regarding the existence of free will, the nature of persons and firm conditions on moral agency/patienthood; all of which are indispensable concepts to be deployed by any machine able to make moral judgments. No agreement seems forthcoming on these matters, and we don’t hold out hope for (...)
  29. Oliver Bendel (2016). Considerations About the Relationship Between Animal and Machine Ethics. AI and Society 31 (1):103-108.
  30. Don Berkich (2002). Autonomous Machine Agency. Dissertation, University of Massachusetts Amherst
    Is it possible to construct a machine that can act of its own accord? There are a number of skeptical arguments which conclude that autonomous machine agency is impossible. Yet if autonomous machine agency is impossible, then serious doubt is cast on the possibility of autonomous human action, at least on the widely held assumption that some form of materialism is true. The purpose of this dissertation is to show that autonomous machine agency is possible, thereby showing that the autonomy (...)
  31. Ashok J. Bharucha, Alex John London, David Barnard, Howard Wactlar, Mary Amanda Dew & Charles F. Reynolds (2006). Ethical Considerations in the Conduct of Electronic Surveillance Research. Journal of Law, Medicine & Ethics 34 (3):611-619.
    The extant clinical literature indicates profound problems in the assessment, monitoring, and documentation of care in long-term care facilities. The lack of adequate resources to accommodate higher staff-to-resident ratios adds additional urgency to the goal of identifying more costeffective mechanisms to provide care oversight. The ever expanding array of electronic monitoring technologies in the clinical research arena demands a conceptual and pragmatic framework for the resolution of ethical tensions inherent in the use of such innovative tools. CareMedia is a project (...)
  32. Russell Blackford & Damien Broderick (eds.) (2014). Intelligence Unbound: The Future of Uploaded and Machine Minds. Wiley-Blackwell.
    _Intelligence Unbound_ explores the prospects, promises, and potential dangers of machine intelligence and uploaded minds in a collection of state-of-the-art essays from internationally recognized philosophers, AI researchers, science fiction authors, and theorists. Compelling and intellectually sophisticated exploration of the latest thinking on Artificial Intelligence and machine minds Features contributions from an international cast of philosophers, Artificial Intelligence researchers, science fiction authors, and more Offers current, diverse perspectives on machine intelligence and uploaded minds, emerging topics of tremendous interest Illuminates the nature (...)
  34. Blay Whitby (2013). When is Any Agent a Moral Agent?: Reflections on Machine Consciousness and Moral Agency. International Journal of Machine Consciousness 5 (1).
  35. Magnus Boman (1999). Norms in Artificial Decision Making. Artificial Intelligence and Law 7 (1):17-35.
    A method for forcing norms onto individual agents in a multi-agent system is presented. The agents under study are supersoft agents: autonomous artificial agents programmed to represent and evaluate vague and imprecise information. Agents are further assumed to act in accordance with advice obtained from a normative decision module, with which they can communicate. Norms act as global constraints on the evaluations performed in the decision module and hence no action that violates a norm will be suggested to any agent. (...)
  36. Selmer Bringsjord (2007). Ethical Robots: The Future Can Heed Us. [REVIEW] AI and Society 22 (4):539-550.
    Bill Joy’s deep pessimism is now famous. Why the Future Doesn’t Need Us, his defense of that pessimism, has been read by, it seems, everyone—and many of these readers, apparently, have been converted to the dark side, or rather more accurately, to the future-is-dark side. Fortunately (for us; unfortunately for Joy), the defense, at least the part of it that pertains to AI and robotics, fails. Ours may be a dark future, but we cannot know that on the basis of (...)
  37. Selmer Bringsjord, Joshua Taylor, Bram van Heuveln, Konstantine Arkoudas, Micah Clark & Ralph Wojtowicz (2011). Piagetian Roboethics via Category Theory: Moving Beyond Mere Formal Operations to Engineer Robots Whose Decisions Are Guaranteed to Be Ethically Correct. In M. Anderson & S. L. Anderson (eds.), Machine Ethics. Cambridge Univ. Press
  38. David J. Calverley (2011). Legal Rights for Machines: Some Fundamental Concepts. In M. Anderson & S. L. Anderson (eds.), Machine Ethics. Cambridge Univ. Press 213.
  39. David J. Calverley (2007). Imagining a Non-Biological Machine as a Legal Person. AI and Society 22 (4):523-537.
    As non-biological machines come to be designed in ways which exhibit characteristics comparable to human mental states, the manner in which the law treats these entities will become increasingly important both to designers and to society at large. The direct question will become whether, given certain attributes, a non-biological machine could ever be viewed as a legal person. In order to begin to understand the ramifications of this question, this paper starts by exploring the distinction between the related concepts of (...)
  40. Rafael Capurro (2012). Toward a Comparative Theory of Agents. AI and Society 27 (4):479-488.
    The purpose of this paper is to address some of the questions on the notion of agent and agency in relation to property and personhood. I argue that following the Kantian criticism of Aristotelian metaphysics, contemporary biotechnology and information and communication technologies bring about a new challenge—this time, with regard to the Kantian moral subject understood in the subject’s unique metaphysical qualities of dignity and autonomy. The concept of human dignity underlies the foundation of many democratic systems, particularly in Europe (...)
  41. Ginevra Castellano & Christopher Peters (2010). Socially Perceptive Robots: Challenges and Concerns. Interaction Studies 11 (2):201-207.
  42. Marcello Guarini (2011). Computational Neural Modeling and the Philosophy of Ethics: Reflections on the Particularism-Generalism Debate. In M. Anderson & S. L. Anderson (eds.), Machine Ethics. Cambridge Univ. Press
  43. Anthony Chemero, Ascribing Moral Value and the Embodied Turing Test.
    What would it take for an artificial agent to be treated as having moral value? As a first step toward answering this question, we ask what it would take for an artificial agent to be capable of the sort of autonomous, adaptive social behavior that is characteristic of the animals that humans interact with. We propose that this sort of capacity is best measured by what we call the Embodied Turing Test. The Embodied Turing test is a test in which (...)
  44. Mark Coeckelbergh (2013). David J. Gunkel: The Machine Question: Critical Perspectives on AI, Robots, and Ethics. [REVIEW] Ethics and Information Technology 15 (3):235-238.
  45. Mark Coeckelbergh (2009). Virtual Moral Agency, Virtual Moral Responsibility: On the Moral Significance of the Appearance, Perception, and Performance of Artificial Agents. [REVIEW] AI and Society 24 (2):181-189.
  46. Jennifer C. Cook (2006). Machine and Metaphor: The Ethics of Language in American Realism. Routledge.
    American literary realism burgeoned during a period of tremendous technological innovation. Because the realists evinced not only a fascination with this new technology but also an ethos that seems to align itself with science, many have paired the two fields rather unproblematically. But this book demonstrates that many realist writers, from Mark Twain to Stephen Crane, Charles W. Chesnutt to Edith Wharton, felt a great deal of anxiety about the advent of new technologies – precisely at the crucial intersection of (...)
  47. Roberto Cordeschi & Guglielmo Tamburrini (2005). Intelligent Machines and Warfare: Historical Debates and Epistemologically Motivated Concerns. In L. Magnani (ed.), European Computing and Philosophy Conference (ECAP 2004). College Publications
    The early examples of self-directing robots attracted the interest of both scientific and military communities. Biologists regarded these devices as material models of animal tropisms. Engineers envisaged the possibility of turning self-directing robots into new “intelligent” torpedoes during World War I. Starting from World War II, more extensive interactions developed between theoretical inquiry and applied military research on the subject of adaptive and intelligent machinery. Pioneers of Cybernetics were involved in the development of goal-seeking warfare devices. But collaboration occasionally turned (...)
  48. Hilde Corneliussen (2005). 'I Fell in Love with the Machine': Women's Pleasure in Computing. Journal of Information, Communication and Ethics in Society 3 (4):233-241.
  49. Gordana Dodig Crnkovic & Baran Çürüklü (2012). Robots: Ethical by Design. Ethics and Information Technology 14 (1):61-71.
    Among ethicists and engineers within robotics there is an ongoing discussion as to whether ethical robots are possible or even desirable. We answer both of these questions in the positive, based on an extensive literature study of existing arguments. Our contribution consists in bringing together and reinterpreting pieces of information from a variety of sources. One of the conclusions drawn is that artifactual morality must come in degrees and depend on the level of agency, autonomy and intelligence of the machine. (...)
  50. Emad Abdel Rahim Dahiyat (2007). Intelligent Agents and Contracts: Is a Conceptual Rethink Imperative? [REVIEW] Artificial Intelligence and Law 15 (4):375-390.
    The emergence of intelligent software agents that operate autonomously with little or no human intervention has generated many doctrinal questions at a conceptual level and has challenged the traditional rules of contract especially those relating to the intention as an essential requirement of any contract conclusion. In this paper, we will try to explore some of these challenges, and shed light on the conflict between the traditional contract theory and the transactional practice in the case of using intelligent software agents. (...)