1 — 50 / 79
  1. M. Anderson, S. L. Anderson & C. Armen (eds.) (2005). Association for the Advancement of Artificial Intelligence Fall Symposium Technical Report.
  2. Stuart Armstrong, Anders Sandberg & Nick Bostrom (2012). Thinking Inside the Box: Controlling and Using an Oracle AI. Minds and Machines 22 (4):299-324.
    There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed motivation system. Solving this issue in general has proven to be considerably harder than expected. This paper looks at one particular approach, Oracle AI. An Oracle AI is an AI that does not act in (...)
  3. Peter M. Asaro (2006). What Should We Want From a Robot Ethic? International Review of Information Ethics 6 (12):9-16.
    There are at least three things we might mean by "ethics in robotics": the ethical systems built into robots, the ethics of people who design and use robots, and the ethics of how people treat robots. This paper argues that the best approach to robot ethics is one which addresses all three of these, and to do this it ought to consider robots as socio-technical systems. By so doing, it is possible to think of a continuum of agency that lies (...)
  4. Hutan Ashrafian (2015). AIonAI: A Humanitarian Law of Artificial Intelligence and Robotics. Science and Engineering Ethics 21 (1):29-40.
    The enduring progression of artificial intelligence and cybernetics offers an ever-closer possibility of rational and sentient robots. The ethics and morals deriving from this technological prospect have been considered in the philosophy of artificial intelligence, the design of automatons with roboethics and the contemplation of machine ethics through the concept of artificial moral agents. Across these categories, the robotics laws first proposed by Isaac Asimov in the twentieth century remain well-recognised and esteemed due to their specification of preventing human harm, (...)
  5. Hutan Ashrafian (2015). Artificial Intelligence and Robot Responsibilities: Innovating Beyond Rights. Science and Engineering Ethics 21 (2):317-326.
    The enduring innovations in artificial intelligence and robotics offer the promised capacity of computer consciousness, sentience and rationality. The development of these advanced technologies have been considered to merit rights, however these can only be ascribed in the context of commensurate responsibilities and duties. This represents the discernable next-step for evolution in this field. Addressing these needs requires attention to the philosophical perspectives of moral responsibility for artificial intelligence and robotics. A contrast to the moral status of animals may (...)
  6. Phil Badger (2014). The Morality Machine. Philosophy Now 104:24-27.
  7. William Sims Bainbridge (2012). Whole-Personality Emulation. International Journal of Machine Consciousness 4 (01):159-175.
  8. Parthasarathi Banerjee (2007). Technology of Culture: The Roadmap of a Journey Undertaken. [REVIEW] AI and Society 21 (4):411-419.
    Artificial intelligence (AI) impacts society and the individual in many subtler and deeper ways than machines based upon the physics and mechanics of descriptive objects. The AI project thus involves culture and provides scope for liberational undertakings. Most importantly, AI implicates human ethical and attitudinal bearings. This essay explores how previous authors in this journal have explored related issues and how such discourses have provided to the present world a roadmap that can be followed to engage in discourses with ethical (...)
  9. Fiorella Battaglia & Nikil Mukerji (2015). Technikethik. In Julian Nida-Rümelin, Irina Spiegel & Markus Tiedemann (eds.), Handbuch Philosophie und Ethik - Band 2: Disziplinen und Themen. UTB 288-295.
  10. Fiorella Battaglia, Nikil Mukerji & Julian Nida-Rümelin (2014). Science, Technology, and Responsibility. In Fiorella Battaglia, Nikil Mukerji & Julian Nida-Rümelin (eds.), Rethinking Responsibility in Science and Technology. Pisa University Press 7-11.
    The empirical circumstances in which human beings ascribe responsibility to one another are subject to change. Science and technology play a great part in this transformation process. Therefore, it is important for us to rethink the idea, the role and the normative standards behind responsibility in a world that is constantly changing under the influence of scientific and technological progress. This volume is a contribution to that joint societal effort.
  11. Oliver Bendel (2016). Considerations About the Relationship Between Animal and Machine Ethics. AI and Society 31 (1):103-108.
  12. Ashok J. Bharucha, Alex John London, David Barnard, Howard Wactlar, Mary Amanda Dew & Charles F. Reynolds (2006). Ethical Considerations in the Conduct of Electronic Surveillance Research. Journal of Law, Medicine & Ethics 34 (3):611-619.
    The extant clinical literature indicates profound problems in the assessment, monitoring, and documentation of care in long-term care facilities. The lack of adequate resources to accommodate higher staff-to-resident ratios adds additional urgency to the goal of identifying more cost-effective mechanisms to provide care oversight. The ever-expanding array of electronic monitoring technologies in the clinical research arena demands a conceptual and pragmatic framework for the resolution of ethical tensions inherent in the use of such innovative tools. CareMedia is a project (...)
  13. Blay Whitby (2013). When is Any Agent a Moral Agent?: Reflections on Machine Consciousness and Moral Agency. International Journal of Machine Consciousness 5 (1).
  14. Magnus Boman (1999). Norms in Artificial Decision Making. Artificial Intelligence and Law 7 (1):17-35.
    A method for forcing norms onto individual agents in a multi-agent system is presented. The agents under study are supersoft agents: autonomous artificial agents programmed to represent and evaluate vague and imprecise information. Agents are further assumed to act in accordance with advice obtained from a normative decision module, with which they can communicate. Norms act as global constraints on the evaluations performed in the decision module and hence no action that violates a norm will be suggested to any agent. (...)
  15. Selmer Bringsjord (2007). Ethical Robots: The Future Can Heed Us. [REVIEW] AI and Society 22 (4):539-550.
    Bill Joy’s deep pessimism is now famous. Why the Future Doesn’t Need Us, his defense of that pessimism, has been read by, it seems, everyone—and many of these readers, apparently, have been converted to the dark side, or rather more accurately, to the future-is-dark side. Fortunately (for us; unfortunately for Joy), the defense, at least the part of it that pertains to AI and robotics, fails. Ours may be a dark future, but we cannot know that on the basis of (...)
  16. David J. Calverley (2007). Imagining a Non-Biological Machine as a Legal Person. AI and Society 22 (4):523-537.
    As non-biological machines come to be designed in ways which exhibit characteristics comparable to human mental states, the manner in which the law treats these entities will become increasingly important both to designers and to society at large. The direct question will become whether, given certain attributes, a non-biological machine could ever be viewed as a legal person. In order to begin to understand the ramifications of this question, this paper starts by exploring the distinction between the related concepts of (...)
  17. Ginevra Castellano & Christopher Peters (2010). Socially Perceptive Robots: Challenges and Concerns. Interaction Studies 11 (2):201-207.
  18. Mark Coeckelbergh (2009). Virtual Moral Agency, Virtual Moral Responsibility: On the Moral Significance of the Appearance, Perception, and Performance of Artificial Agents. [REVIEW] AI and Society 24 (2):181-189.
  19. Roberto Cordeschi & Guglielmo Tamburrini (2005). Intelligent Machines and Warfare: Historical Debates and Epistemologically Motivated Concerns. In L. Magnani (ed.), European Computing and Philosophy Conference (ECAP 2004). College Publications
    The early examples of self-directing robots attracted the interest of both scientific and military communities. Biologists regarded these devices as material models of animal tropisms. Engineers envisaged the possibility of turning self-directing robots into new “intelligent” torpedoes during World War I. Starting from World War II, more extensive interactions developed between theoretical inquiry and applied military research on the subject of adaptive and intelligent machinery. Pioneers of Cybernetics were involved in the development of goal-seeking warfare devices. But collaboration occasionally turned (...)
  20. Emad Abdel Rahim Dahiyat (2007). Intelligent Agents and Contracts: Is a Conceptual Rethink Imperative? [REVIEW] Artificial Intelligence and Law 15 (4):375-390.
    The emergence of intelligent software agents that operate autonomously with little or no human intervention has generated many doctrinal questions at a conceptual level and has challenged the traditional rules of contract especially those relating to the intention as an essential requirement of any contract conclusion. In this paper, we will try to explore some of these challenges, and shed light on the conflict between the traditional contract theory and the transactional practice in the case of using intelligent software agents. (...)
  21. John Danaher (forthcoming). Will Life Be Worth Living in a World Without Work? Technological Unemployment and the Meaning of Life. Science and Engineering Ethics:1-24.
    Suppose we are about to enter an era of increasing technological unemployment. What implications does this have for society? Two distinct ethical/social issues would seem to arise. The first is one of distributive justice: how will the (presumed) efficiency gains from automated labour be distributed through society? The second is one of personal fulfillment and meaning: if people no longer have to work, what will they do with their lives? In this article, I set aside the first issue and focus (...)
  22. John Danaher (2015). Why AI Doomsayers Are Like Sceptical Theists and Why It Matters. Minds and Machines 25 (3):231-246.
    An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks. And there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about (...)
  23. Paul B. de Laat (2015). Trusting the (Ro)Botic Other: By Assumption? SIGCAS Computers and Society 45 (3):255-260.
    How may human agents come to trust (sophisticated) artificial agents? At present, since the trust involved is non-normative, this would seem to be a slow process, depending on the outcomes of the transactions. Some more options may soon become available though. As debated in the literature, humans may meet (ro)bots as they are embedded in an institution. If they happen to trust the institution, they will also trust them to have tried out and tested the machines in their (...)
  24. Ezio Di Nucci & Filippo Santoni de Sio (forthcoming). Who’s Afraid of Robots? Fear of Automation and the Ideal of Direct Control. In Fiorella Battaglia & Natalie Weidenfeld (eds.), Roboethics in Film. Pisa University Press
    We argue that lack of direct and conscious control is not, in principle, a reason to be afraid of machines in general and robots in particular: in order to articulate the ethical and political risks of increasing automation one must, therefore, tackle the difficult task of precisely delineating the theoretical and practical limits of sustainable delegation to robots.
  25. Gordana Dodig Crnkovic & Daniel Persson (2008). Sharing Moral Responsibility with Robots: A Pragmatic Approach. In Holst, Per Kreuger & Peter Funk (eds.), Frontiers in Artificial Intelligence and Applications Volume 173. IOS Press Books
    Roboethics is a recently developed field of applied ethics which deals with the ethical aspects of technologies such as robots, ambient intelligence, direct neural interfaces and invasive nano-devices and intelligent soft bots. In this article we look specifically at the issue of (moral) responsibility in artificial intelligent systems. We argue for a pragmatic approach, where responsibility is seen as a social regulatory mechanism. We claim that having a system which takes care of certain tasks intelligently, learning from experience and making (...)
  26. Dominika Dzwonkowska (2013). The Machine Question: Critical Perspectives on AI, Robots, and Ethics. By David J. Gunkel. International Philosophical Quarterly 53 (1):91-93.
  27. David Feil-Seifer & Maja J. Mataric (2010). Dry Your Eyes: Examining the Roles of Robots for Childcare Applications. Interaction Studies 11 (2):208-213.
  28. Luciano Floridi & J. W. Sanders (2004). On the Morality of Artificial Agents. Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most (...)
  29. Christopher Grau (2011). There is No 'I' in 'Robot': Robots and Utilitarianism (Expanded & Revised). In Susan Anderson & Michael Anderson (eds.), Machine Ethics. Cambridge University Press 451.
    Utilizing the film I, Robot as a springboard, I here consider the feasibility of robot utilitarians, the moral responsibilities that come with the creation of ethical robots, and the possibility of distinct ethics for robot-robot interaction as opposed to robot-human interaction. (This is a revised and expanded version of an essay that originally appeared in IEEE: Intelligent Systems.).
  30. Christopher Grau (2006). There is No ‘I’ in ‘Robot’: Robots & Utilitarianism. IEEE Intelligent Systems 21 (4):52-55.
  31. Graham Greenleaf, Andrew Mowbray & Peter Dijk (1995). Representing and Using Legal Knowledge in Integrated Decision Support Systems: Datalex Workstations. [REVIEW] Artificial Intelligence and Law 3 (1-2):97-142.
    There is more to legal knowledge representation than knowledge-bases. It is valuable to look at legal knowledge representation and its implementation across the entire domain of computerisation of law, rather than focussing on sub-domains such as legal expert systems. The DataLex WorkStation software and applications developed using it are used to provide examples. Effective integration of inferencing, hypertext and text retrieval can overcome some of the limitations of these current paradigms of legal computerisation which are apparent when they are used (...)
  32. David J. Gunkel (2014). A Vindication of the Rights of Machines. Philosophy and Technology 27 (1):113-132.
    This essay responds to the machine question in the affirmative, arguing that artifacts, like robots, AI, and other autonomous systems, can no longer be legitimately excluded from moral consideration. The demonstration of this thesis proceeds in four parts or movements. The first and second parts approach the subject by investigating the two constitutive components of the ethical relationship—moral agency and patiency. In the process, they each demonstrate failure. This occurs not because the machine is somehow unable to achieve what is (...)
  33. B. Jain, Web Browser as a Forensic Computing Tool.
    Cyber crimes have become more prevalent and damaging with reported annual loss of billions of dollars globally. The anonymity on the Internet and the possibility of launching attacks remotely have made it more difficult to find the origin of the crime and tracing back the criminals. Cyber crime consists of specific crimes dealing with computers and networks (such as hacking) and the facilitation of traditional crime through the use of computers (child pornography, hate crimes, telemarketing /Internet fraud). In addition to (...)
  34. Deborah G. Johnson & Thomas M. Powers (2008). Computers as Surrogate Agents. In M. J. van den Hoven & J. Weckert (eds.), Information Technology and Moral Philosophy. Cambridge University Press 251.
  35. Gert-Jan Lokhorst (2011). Erratum To: Computational Meta-Ethics. [REVIEW] Minds and Machines 21 (3):475-475.
  36. Gert-Jan Lokhorst (2011). Computational Meta-Ethics. Minds and Machines 21 (2):261-274.
    It has been argued that ethically correct robots should be able to reason about right and wrong. In order to do so, they must have a set of do’s and don’ts at their disposal. However, such a list may be inconsistent, incomplete or otherwise unsatisfactory, depending on the reasoning principles that one employs. For this reason, it might be desirable if robots were to some extent able to reason about their own reasoning—in other words, if they had some meta-ethical capacities. (...)
  37. Catrin Misselhorn, Ulrike Pompe & Mog Stapleton (2013). Ethical Considerations Regarding the Use of Social Robots in the Fourth Age. Geropsych 26 (2):121-133.
    The debate about the use of robots in the care of older adults has often been dominated by either overly optimistic visions (coming particularly from Japan), in which robots are seamlessly incorporated into society thereby enhancing quality of life for everyone; or by extremely pessimistic scenarios that paint such a future as horrifying. We reject this dichotomy and argue for a more differentiated ethical evaluation of the possibilities and risks involved with the use of social robots. In a critical discussion (...)
  38. Javier R. Movellan (2010). Warning: The Author of This Document May Have No Mental States. Read at Your Own Risk. Interaction Studies 11 (2):238-245.
  39. Nikil Mukerji (forthcoming). Autonomous Killer Drones. In Ezio Di Nucci & Filippo Santoni de Sio (eds.), Drones and Responsibility: Legal, Philosophical and Socio-Technical Perspectives on the Use of Remotely Controlled Weapons. Ashgate
    In this paper, I address the question whether drones, which may soon possess the ability to make autonomous choices, should be allowed to make life-and-death decisions and act on them. To this end, I examine an argument proposed by Rob Sparrow, who dismisses the ethicality of what he calls “killer robots”. If successful, his conclusion would extend to the use of what I call autonomous killer drones, which are special kinds of killer robots. In Sparrow’s reasoning, considerations of responsibility occupy (...)
  40. Vincent C. Müller (forthcoming). Autonomous Killer Robots Are Probably Good News. In Ezio Di Nucci & Filippo Santonio de Sio (eds.), Drones and responsibility: Legal, philosophical and socio-technical perspectives on the use of remotely controlled weapons. Ashgate
    Will future lethal autonomous weapon systems (LAWS), or ‘killer robots’, be a threat to humanity? The European Parliament has called for a moratorium or ban of LAWS; the ‘Contracting Parties to the Geneva Convention at the United Nations’ are presently discussing such a ban, which is supported by the great majority of writers and campaigners on the issue. However, the main arguments in favour of a ban are unsound. LAWS do not support extrajudicial killings, they do not take responsibility away (...)
  41. Vincent C. Müller (2016). Editorial: Risks of Artificial Intelligence. In Risks of artificial intelligence. CRC Press - Chapman & Hall 1-8.
    If the intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity. The time has come to consider these issues, and this consideration must include progress in AI as much as insights from the theory of AI. The papers in this volume try to make cautious headway in setting the problem, evaluating predictions on the future of AI, proposing ways to ensure that AI systems will be beneficial to humans – and critically (...)
  42. Vincent C. Müller (ed.) (2016). Risks of Artificial Intelligence. CRC Press - Chapman & Hall.
    Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. --- If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. -- Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated to (...)
  43. Vincent C. Müller (2014). Editorial: Risks of General Artificial Intelligence. Journal of Experimental and Theoretical Artificial Intelligence 26 (3):297-301.
    This is the editorial for a special volume of JETAI, featuring papers by Omohundro, Armstrong/Sotala/O’Heigeartaigh, T Goertzel, Brundage, Yampolskiy, B. Goertzel, Potapov/Rodinov, Kornai and Sandberg. - If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity – so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we can expect, what (...)
  44. Vincent C. Müller (ed.) (2014). Risks of Artificial General Intelligence. Taylor & Francis (JETAI).
    Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3# - Contents: “Risks of general artificial intelligence”, Vincent C. Müller, pages 297-301; “Autonomous technology and the greater human good”, Steve Omohundro, pages 303-315; “The errors, insights and lessons of famous AI predictions – and what they mean for the future”, Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh, pages 317-342; (...)
  45. Vincent C. Müller (2009). Would You Mind Being Watched by Machines? Privacy Concerns in Data Mining. AI and Society 23 (4):529-544.
    "Data mining is not an invasion of privacy because access to data is only by machines, not by people": this is the argument that is investigated here. The current importance of this problem is developed in a case study of data mining in the USA for counterterrorism and other surveillance purposes. After a clarification of the relevant nature of privacy, it is argued that access by machines cannot warrant the access to further information, since the analysis will have to be (...)
  46. Vincent C. Müller & Nick Bostrom (2014). Future Progress in Artificial Intelligence: A Poll Among Experts. AI Matters 1 (1):9-11.
    [This is the short version of: Müller, Vincent C. and Bostrom, Nick (forthcoming 2016), ‘Future progress in artificial intelligence: A survey of expert opinion’, in Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence (Synthese Library 377; Berlin: Springer).] - - - In some quarters, there is intense concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity; in other quarters, these issues are ignored or considered science (...)
  47. Vincent C. Müller & Thomas W. Simpson (2015). Réguler les robots-tueurs, plutôt que les interdire. Multitudes 58 (1):77.
    This is the short version, in French translation by Anne Querrien, of the originally jointly authored paper: Müller, Vincent C., ‘Autonomous killer robots are probably good news’, in Ezio Di Nucci and Filippo Santoni de Sio, Drones and responsibility: Legal, philosophical and socio-technical perspectives on the use of remotely controlled weapons. - - - The article that follows presents a new robot-based weapons system that is likely to be used in the near future. Unlike drones, which are operated (...)
  48. Caspar Oesterheld (forthcoming). Formalizing Preference Utilitarianism in Physical World Models. Synthese:1-13.
    Most ethical work is done at a low level of formality. This makes practical moral questions inaccessible to formal and natural sciences and can lead to misunderstandings in ethical discussion. In this paper, we use Bayesian inference to introduce a formalization of preference utilitarianism in physical world models, specifically cellular automata. Even though our formalization is not immediately applicable, it is a first step in providing ethics and ultimately the question of how to “make the world better” with a (...)
  49. Peter Olsthoorn & Lambèr Royakkers, Risks and Robots – Some Ethical Issues. Archive International Society for Military Ethics, 2011.
    While in many countries the use of unmanned systems is still in its infancy, other countries, most notably the US and Israel, are much ahead. Most of the systems in operation today are unarmed and are mainly used for reconnaissance and clearing improvised explosive devices. But over the last years the deployment of armed military robots is also on the increase, especially in the air. This might make unethical behavior less likely to happen, seeing that unmanned systems are immune to (...)
  50. Erica Palmerini, Federico Azzarri, Fiorella Battaglia, Andrea Bertolini, Antonio Carnevale, Jacopo Carpaneto, Filippo Cavallo, Angela Di Carlo, Marco Cempini, Marco Controzzi, Bert-Jaap Koops, Federica Lucivero, Nikil Mukerji, Luca Nocco, Alberto Pirni & Huma Shah (2014). Guidelines on Regulating Robotics. Robolaw (FP7 Project).