  1. M. Anderson, S. L. Anderson & C. Armen (eds.) (2005). Association for the Advancement of Artificial Intelligence Fall Symposium Technical Report.
  2. Hutan Ashrafian (2015). AIonAI: A Humanitarian Law of Artificial Intelligence and Robotics. Science and Engineering Ethics 21 (1):29-40.
    The enduring progression of artificial intelligence and cybernetics offers an ever-closer possibility of rational and sentient robots. The ethics and morals deriving from this technological prospect have been considered in the philosophy of artificial intelligence, the design of automatons with roboethics and the contemplation of machine ethics through the concept of artificial moral agents. Across these categories, the robotics laws first proposed by Isaac Asimov in the twentieth century remain well-recognised and esteemed due to their specification of preventing human harm, (...)
  3. Hutan Ashrafian (2015). Artificial Intelligence and Robot Responsibilities: Innovating Beyond Rights. Science and Engineering Ethics 21 (2):317-326.
    The enduring innovations in artificial intelligence and robotics offer the promised capacity of computer consciousness, sentience and rationality. The development of these advanced technologies has been considered to merit rights; however, these can only be ascribed in the context of commensurate responsibilities and duties. This represents the discernible next step for evolution in this field. Addressing these needs requires attention to the philosophical perspectives of moral responsibility for artificial intelligence and robotics. A contrast to the moral status of animals may be (...)
  4. Phil Badger (2014). The Morality Machine. Philosophy Now 104:24-27.
  5. William Sims Bainbridge (2012). Whole-Personality Emulation. International Journal of Machine Consciousness 4 (1):159-175.
  6. Parthasarathi Banerjee (2007). Technology of Culture: The Roadmap of a Journey Undertaken. [REVIEW] AI and Society 21 (4):411-419.
    Artificial intelligence (AI) impacts society and the individual in many subtler and deeper ways than machines based upon the physics and mechanics of descriptive objects. The AI project thus involves culture and provides scope for liberational undertakings. Most importantly, AI implicates human ethical and attitudinal bearings. This essay explores how previous authors in this journal have explored related issues and how such discourses have provided to the present world a roadmap that can be followed to engage in discourses with ethical (...)
  7. Fiorella Battaglia & Nikil Mukerji (forthcoming). Technikethik. In Julian Nida-Rümelin, Irina Spiegel & Markus Tiedemann (eds.), Handbuch Philosophie und Ethik - Band 2: Disziplinen und Themen. UTB.
  8. Fiorella Battaglia, Nikil Mukerji & Julian Nida-Rümelin (2014). Science, Technology, and Responsibility. In Fiorella Battaglia, Nikil Mukerji & Julian Nida-Rümelin (eds.), Rethinking Responsibility in Science and Technology. Pisa University Press. 7-11.
    The empirical circumstances in which human beings ascribe responsibility to one another are subject to change. Science and technology play a great part in this transformation process. Therefore, it is important for us to rethink the idea, the role and the normative standards behind responsibility in a world that is constantly changing under the influence of scientific and technological progress. This volume is a contribution to that joint societal effort.
  9. Oliver Bendel (forthcoming). Considerations About the Relationship Between Animal and Machine Ethics. AI and Society.
  10. Ashok J. Bharucha, Alex John London, David Barnard, Howard Wactlar, Mary Amanda Dew & Charles F. Reynolds (2006). Ethical Considerations in the Conduct of Electronic Surveillance Research. Journal of Law, Medicine & Ethics 34 (3):611-619.
    The extant clinical literature indicates profound problems in the assessment, monitoring, and documentation of care in long-term care facilities. The lack of adequate resources to accommodate higher staff-to-resident ratios adds additional urgency to the goal of identifying more cost-effective mechanisms to provide care oversight. The ever-expanding array of electronic monitoring technologies in the clinical research arena demands a conceptual and pragmatic framework for the resolution of ethical tensions inherent in the use of such innovative tools. CareMedia is a project (...)
  11. Blay Whitby (2013). When is Any Agent a Moral Agent?: Reflections on Machine Consciousness and Moral Agency. International Journal of Machine Consciousness 5 (1).
  12. Magnus Boman (1999). Norms in Artificial Decision Making. Artificial Intelligence and Law 7 (1):17-35.
    A method for forcing norms onto individual agents in a multi-agent system is presented. The agents under study are supersoft agents: autonomous artificial agents programmed to represent and evaluate vague and imprecise information. Agents are further assumed to act in accordance with advice obtained from a normative decision module, with which they can communicate. Norms act as global constraints on the evaluations performed in the decision module and hence no action that violates a norm will be suggested to any agent. (...)
  13. Nick Bostrom, Ethical Issues in Advanced Artificial Intelligence.
    The ethical issues related to the possible future creation of machines with general intellectual capabilities far outstripping those of humans are quite distinct from any ethical problems arising in current automation and information systems. Such superintelligence would not be just another technological development; it would be the most important invention ever made, and would lead to explosive progress in all scientific and technological fields, as the superintelligence would conduct research with superhuman efficiency. To the extent that ethics is a cognitive (...)
  14. Selmer Bringsjord (2007). Ethical Robots: The Future Can Heed Us. [REVIEW] AI and Society 22 (4):539-550.
    Bill Joy’s deep pessimism is now famous. Why the Future Doesn’t Need Us, his defense of that pessimism, has been read by, it seems, everyone—and many of these readers, apparently, have been converted to the dark side, or rather more accurately, to the future-is-dark side. Fortunately (for us; unfortunately for Joy), the defense, at least the part of it that pertains to AI and robotics, fails. Ours may be a dark future, but we cannot know that on the basis of (...)
  15. David J. Calverley (2007). Imagining a Non-Biological Machine as a Legal Person. AI and Society 22 (4):523-537.
    As non-biological machines come to be designed in ways which exhibit characteristics comparable to human mental states, the manner in which the law treats these entities will become increasingly important both to designers and to society at large. The direct question will become whether, given certain attributes, a non-biological machine could ever be viewed as a legal person. In order to begin to understand the ramifications of this question, this paper starts by exploring the distinction between the related concepts of (...)
  16. Ginevra Castellano & Christopher Peters (2010). Socially Perceptive Robots: Challenges and Concerns. Interaction Studies 11 (2):201-207.
  17. Mark Coeckelbergh (2009). Virtual Moral Agency, Virtual Moral Responsibility: On the Moral Significance of the Appearance, Perception, and Performance of Artificial Agents. [REVIEW] AI and Society 24 (2):181-189.
  18. Roberto Cordeschi & Guglielmo Tamburrini (2005). Intelligent Machines and Warfare: Historical Debates and Epistemologically Motivated Concerns. In L. Magnani (ed.), European Computing and Philosophy Conference (ECAP 2004). College Publications.
    The early examples of self-directing robots attracted the interest of both scientific and military communities. Biologists regarded these devices as material models of animal tropisms. Engineers envisaged the possibility of turning self-directing robots into new “intelligent” torpedoes during World War I. Starting from World War II, more extensive interactions developed between theoretical inquiry and applied military research on the subject of adaptive and intelligent machinery. Pioneers of Cybernetics were involved in the development of goal-seeking warfare devices. But collaboration occasionally turned (...)
  19. Emad Abdel Rahim Dahiyat (2007). Intelligent Agents and Contracts: Is a Conceptual Rethink Imperative? [REVIEW] Artificial Intelligence and Law 15 (4):375-390.
    The emergence of intelligent software agents that operate autonomously with little or no human intervention has generated many doctrinal questions at a conceptual level and has challenged the traditional rules of contract especially those relating to the intention as an essential requirement of any contract conclusion. In this paper, we will try to explore some of these challenges, and shed light on the conflict between the traditional contract theory and the transactional practice in the case of using intelligent software agents. (...)
  20. John Danaher (forthcoming). Why AI Doomsayers Are Like Sceptical Theists and Why It Matters. Minds and Machines:1-16.
    An advanced artificial intelligence (a “superintelligence”) could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks, and there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate (...)
  21. Ezio Di Nucci & Filippo Santoni de Sio (forthcoming). Who’s Afraid of Robots? Fear of Automation and the Ideal of Direct Control. In Fiorella Battaglia & Natalie Weidenfeld (eds.), Roboethics in Film. Pisa University Press.
    We argue that lack of direct and conscious control is not, in principle, a reason to be afraid of machines in general and robots in particular: in order to articulate the ethical and political risks of increasing automation one must, therefore, tackle the difficult task of precisely delineating the theoretical and practical limits of sustainable delegation to robots.
  22. Gordana Dodig Crnkovic & Daniel Persson (2008). Sharing Moral Responsibility with Robots: A Pragmatic Approach. In Holst, Per Kreuger & Peter Funk (eds.), Frontiers in Artificial Intelligence and Applications Volume 173. IOS Press Books.
    Roboethics is a recently developed field of applied ethics which deals with the ethical aspects of technologies such as robots, ambient intelligence, direct neural interfaces and invasive nano-devices and intelligent soft bots. In this article we look specifically at the issue of (moral) responsibility in artificial intelligent systems. We argue for a pragmatic approach, where responsibility is seen as a social regulatory mechanism. We claim that having a system which takes care of certain tasks intelligently, learning from experience and making (...)
  23. Dominika Dzwonkowska (2013). The Machine Question: Critical Perspectives on AI, Robots, and Ethics. By David J. Gunkel. International Philosophical Quarterly 53 (1):91-93.
  24. David Feil-Seifer & Maja J. Mataric (2010). Dry Your Eyes: Examining the Roles of Robots for Childcare Applications. Interaction Studies 11 (2):208-213.
  25. Luciano Floridi & J. W. Sanders (2004). On the Morality of Artificial Agents. Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most (...)
  26. Christopher Grau (2011). There is No 'I' in 'Robot': Robots and Utilitarianism (Expanded & Revised). In Susan Anderson & Michael Anderson (eds.), Machine Ethics. Cambridge University Press. 451.
    Utilizing the film I, Robot as a springboard, I here consider the feasibility of robot utilitarians, the moral responsibilities that come with the creation of ethical robots, and the possibility of distinct ethics for robot-robot interaction as opposed to robot-human interaction. (This is a revised and expanded version of an essay that originally appeared in IEEE Intelligent Systems.)
  27. Christopher Grau (2006). There is No ‘I’ in ‘Robot’: Robots & Utilitarianism. IEEE Intelligent Systems 21 (4):52-55.
  28. Graham Greenleaf, Andrew Mowbray & Peter Dijk (1995). Representing and Using Legal Knowledge in Integrated Decision Support Systems: DataLex WorkStations. [REVIEW] Artificial Intelligence and Law 3 (1-2):97-142.
    There is more to legal knowledge representation than knowledge-bases. It is valuable to look at legal knowledge representation and its implementation across the entire domain of computerisation of law, rather than focussing on sub-domains such as legal expert systems. The DataLex WorkStation software and applications developed using it are used to provide examples. Effective integration of inferencing, hypertext and text retrieval can overcome some of the limitations of these current paradigms of legal computerisation which are apparent when they are used (...)
  29. David J. Gunkel (2014). A Vindication of the Rights of Machines. Philosophy and Technology 27 (1):113-132.
    This essay responds to the machine question in the affirmative, arguing that artifacts, like robots, AI, and other autonomous systems, can no longer be legitimately excluded from moral consideration. The demonstration of this thesis proceeds in four parts or movements. The first and second parts approach the subject by investigating the two constitutive components of the ethical relationship—moral agency and patiency. In the process, they each demonstrate failure. This occurs not because the machine is somehow unable to achieve what is (...)
  30. B. Jain, Web Browser as a Forensic Computing Tool.
    Cyber crimes have become more prevalent and damaging, with reported annual losses of billions of dollars globally. The anonymity of the Internet and the possibility of launching attacks remotely have made it more difficult to find the origin of the crime and trace the criminals. Cyber crime consists of specific crimes dealing with computers and networks (such as hacking) and the facilitation of traditional crime through the use of computers (child pornography, hate crimes, telemarketing/Internet fraud). In addition to (...)
  31. Deborah G. Johnson & Thomas M. Powers (2008). Computers as Surrogate Agents. In M. J. van den Hoven & J. Weckert (eds.), Information Technology and Moral Philosophy. Cambridge University Press. 251.
  32. Gert-Jan Lokhorst (2011). Computational Meta-Ethics. Minds and Machines 21 (2):261-274.
    It has been argued that ethically correct robots should be able to reason about right and wrong. In order to do so, they must have a set of do’s and don’ts at their disposal. However, such a list may be inconsistent, incomplete or otherwise unsatisfactory, depending on the reasoning principles that one employs. For this reason, it might be desirable if robots were to some extent able to reason about their own reasoning—in other words, if they had some meta-ethical capacities. (...)
  33. Gert-Jan Lokhorst (2011). Erratum To: Computational Meta-Ethics. [REVIEW] Minds and Machines 21 (3):475-475.
  34. Catrin Misselhorn, Ulrike Pompe & Mog Stapleton (2013). Ethical Considerations Regarding the Use of Social Robots in the Fourth Age. Geropsych 26 (2):121-133.
    The debate about the use of robots in the care of older adults has often been dominated by either overly optimistic visions (coming particularly from Japan), in which robots are seamlessly incorporated into society thereby enhancing quality of life for everyone; or by extremely pessimistic scenarios that paint such a future as horrifying. We reject this dichotomy and argue for a more differentiated ethical evaluation of the possibilities and risks involved with the use of social robots. In a critical discussion (...)
  35. Javier R. Movellan (2010). Warning: The Author of This Document May Have No Mental States. Read at Your Own Risk. Interaction Studies 11 (2):238-245.
  36. Nikil Mukerji (forthcoming). Autonomous Killer Drones. In Ezio Di Nucci & Filippo Santoni de Sio (eds.), Drones and Responsibility: Legal, Philosophical and Socio-Technical Perspectives on the Use of Remotely Controlled Weapons. Ashgate.
    In this paper, I address the question whether drones, which may soon possess the ability to make autonomous choices, should be allowed to make life-and-death decisions and act on them. To this end, I examine an argument proposed by Rob Sparrow, who dismisses the ethicality of what he calls “killer robots”. If successful, his conclusion would extend to the use of what I call autonomous killer drones, which are special kinds of killer robots. In Sparrow’s reasoning, considerations of responsibility occupy (...)
  37. Peter Olsthoorn & Lambèr Royakkers, Risks and Robots – Some Ethical Issues. Archive International Society for Military Ethics, 2011.
  38. Erica Palmerini, Federico Azzarri, Fiorella Battaglia, Andrea Bertolini, Antonio Carnevale, Jacopo Carpaneto, Filippo Cavallo, Angela Di Carlo, Marco Cempini, Marco Controzzi, Bert-Jaap Koops, Federica Lucivero, Nikil Mukerji, Luca Nocco, Alberto Pirni & Huma Shah (2014). Guidelines on Regulating Robotics. Robolaw (FP7 project).
  39. Steve Petersen (forthcoming). Designing People to Serve. In Patrick Lin, George Bekey & Keith Abney (eds.), Robot Ethics. MIT Press.
    I argue that, contrary to intuition, it would be both possible and permissible to design people - whether artificial or organic - who by their nature desire to do tasks we find unpleasant.
  40. Dean Petters, Everett Waters & Felix Schonbrodt (2010). Strange Carers: Robots as Attachment Figures and Aids to Parenting. Interaction Studies 11 (2):246-252.
  41. Thomas M. Powers (2011). Incremental Machine Ethics. IEEE Robotics and Automation 18 (1):51-58.
  42. Thomas M. Powers (2009). Machines and Moral Reasoning. Philosophy Now 72:15-16.
  43. Thomas M. Powers (2006). Prospects for a Kantian Machine. IEEE Intelligent Systems 21 (4):46-51.
    This paper is reprinted in the book Machine Ethics, eds. M. Anderson and S. Anderson, Cambridge University Press, 2011.
  44. Lambèr Royakkers & Peter Olsthoorn (2014). Military Robots and the Question of Responsibility. International Journal of Technoethics 5 (1):1-14.
    Most unmanned systems used in operations today are unarmed and mainly used for reconnaissance and mine clearing, yet the increase in the number of armed military robots is undeniable. The use of these robots raises some serious ethical questions. For instance: who can reasonably be held morally responsible when a military robot is involved in an act of violence that would normally be described as a war crime? In this article, we critically assess the attribution of responsibility with respect (...)
  45. Christopher Santos-Lang (ed.) (2014). Moral Ecology Approaches to Machine Ethics. Springer.
    Wallach and Allen’s seminal book, Moral Machines: Teaching Robots Right from Wrong, categorized theories of machine ethics by the types of algorithms each employs (e.g., top-down vs. bottom-up), ultimately concluding that a hybrid approach would be necessary. Humans are hybrids individually: our brains are wired to adapt our evaluative approach to our circumstances. For example, stressors can inhibit the action of oxytocin in the brain, thus forcing a nurse who usually acts from subjective empathy to defer to objective rules instead. (...)
  46. Christopher Santos-Lang (2014). Our Responsibility to Manage Evaluative Diversity. ACM SIGCAS Computers and Society 44 (2):16-19.
    The ecosystem approach to computer system development is similar to management of biodiversity. Instead of modeling machines after a successful individual, it models machines after successful teams. It includes measuring the evaluative diversity of human teams (i.e. the disparity in ways members conduct the evaluative aspect of decision-making), adding similarly diverse machines to those teams, and monitoring the impact on evaluative balance. This article reviews new research relevant to this approach, especially the validation of a survey instrument for measuring computational (...)
  47. Amanda Sharkey & Noel Sharkey (2012). Granny and the Robots: Ethical Issues in Robot Care for the Elderly. Ethics and Information Technology 14 (1):27-40.
    The growing proportion of elderly people in society, together with recent advances in robotics, makes the use of robots in elder care increasingly likely. We outline developments in the areas of robot applications for assisting the elderly and their carers, for monitoring their health and safety, and for providing them with companionship. Despite the possible benefits, we raise and discuss six main ethical concerns associated with: (1) the potential reduction in the amount of human contact; (2) an increase in the (...)
  48. Noel Sharkey & Amanda Sharkey (2010). Robot Nannies Get a Wheel in the Door: A Response to the Commentaries. Interaction Studies 11 (2):302-313.
  49. Noel Sharkey & Amanda Sharkey (2010). The Crying Shame of Robot Nannies: An Ethical Appraisal. Interaction Studies 11 (2):161-190.
    Childcare robots are being manufactured and developed with the long term aim of creating surrogate carers. While total childcare is not yet being promoted, there are indications that it is 'on the cards'. We examine recent research and developments in childcare robots and speculate on progress over the coming years by extrapolating from other ongoing robotics work. Our main aim is to raise ethical questions about the part or full-time replacement of primary carers. The questions are about human rights, privacy, (...)
  50. Robert Sparrow & Linda Sparrow (2006). In the Hands of Machines? The Future of Aged Care. Minds and Machines 16 (2):141-161.
    It is remarkable how much robotics research is promoted by appealing to the idea that the only way to deal with a looming demographic crisis is to develop robots to look after older persons. This paper surveys and assesses the claims made on behalf of robots in relation to their capacity to meet the needs of older persons. We consider each of the roles that has been suggested for robots in aged care and attempt to evaluate how successful robots might (...)