The End of Materialism: How Evidence of the Paranormal Is Bringing Science and Spirit Together by Charles T. Tart. Philosophy of Personal Identity and Multiple Personality by Logi Gunnarsson. Eusapia Palladino and Her Phenomena by Hereward Carrington. Can the Mind Survive beyond Death? In Pursuit of Scientific Evidence by Satwant K. Pasricha. Morphic Resonance: The Nature of Formative Causation by Rupert Sheldrake. A New Science of Life: The Hypothesis of Formative Causation by Rupert Sheldrake. “Why AI Is a Dangerous Dream,” Opinion, Interview with Noel Sharkey by Nic Fleming.
The growing proportion of elderly people in society, together with recent advances in robotics, makes the use of robots in elder care increasingly likely. We outline developments in the areas of robot applications for assisting the elderly and their carers, for monitoring their health and safety, and for providing them with companionship. Despite the possible benefits, we raise and discuss six main ethical concerns associated with: (1) the potential reduction in the amount of human contact; (2) an increase in the feelings of objectification and loss of control; (3) a loss of privacy; (4) a loss of personal liberty; (5) deception and infantilisation; (6) the circumstances in which elderly people should be allowed to control robots. We conclude by balancing the care benefits against the ethical costs. If introduced with foresight and careful guidelines, robots and robotic technology could improve the lives of the elderly, reducing their dependence, and creating more opportunities for social interaction.
This paper explores the relationship between dignity and robot care for older people. It highlights the disquiet that is often expressed about failures to maintain the dignity of vulnerable older people, but points out some of the contradictory uses of the word ‘dignity’. Certain authors have resolved these contradictions by identifying different senses of dignity; contrasting the inviolable dignity inherent in human life with other forms of dignity which can be present to varying degrees. The Capability Approach (CA) is introduced as a different but tangible account of what it means to live a life worthy of human dignity. It is used here as a framework for the assessment of the possible effects of eldercare robots on human dignity. The CA enables the identification of circumstances in which robots could enhance dignity by expanding the set of capabilities that are accessible to frail older people. At the same time, it is also possible within its framework to identify ways in which robots could have a negative impact, by impeding the access of older people to essential capabilities. It is concluded that the CA has some advantages over other accounts of dignity, but that further work and empirical study is needed in order to adapt it to the particular circumstances and concerns of those in the latter part of their lives.
Childcare robots are being manufactured and developed with the long term aim of creating surrogate carers. While total childcare is not yet being promoted, there are indications that it is ‘on the cards’. We examine recent research and developments in childcare robots and speculate on progress over the coming years by extrapolating from other ongoing robotics work. Our main aim is to raise ethical questions about the part or full-time replacement of primary carers. The questions are about human rights, privacy, robot use of restraint, deception of children and accountability. But the most pressing ethical issues throughout the paper concern the consequences for the psychological and emotional wellbeing of children. We set these in the context of the child development literature on the pathology and causes of attachment disorders. We then consider the adequacy of current legislation and international ethical guidelines on the protection of children from the overuse of robot care.
One of the several reasons given in calls for the prohibition of autonomous weapons systems (AWS) is that they are against human dignity (Asaro, 2012; Docherty, 2014; Heyns, 2017; Ulgen, 2016). However, there have been criticisms of the reliance on human dignity in arguments against AWS (Birnbacher, 2016; Pop, 2018; Saxton, 2016). This paper critically examines the relationship between human dignity and autonomous weapons systems. Three main types of objection to AWS are identified: (i) arguments based on technology and the ability of AWS to conform to International Humanitarian Law; (ii) deontological arguments based on the need for human judgement and meaningful human control, including arguments based on human dignity; (iii) consequentialist reasons about their effects on global stability and the likelihood of going to war. An account is provided of the claims made about human dignity and AWS, of the criticisms of these claims, and of the several meanings of ‘dignity’. It is concluded that although there are several ways in which AWS can be said to be against human dignity, they are not unique in this respect. There are other weapons, and other technologies, that also compromise human dignity. Given this, and the ambiguities inherent in the concept, it is wiser to draw on several types of objections in arguments against AWS, and not to rely exclusively on human dignity.
As robots are deployed in a widening range of situations, it is necessary to develop a clearer position about whether or not they can be trusted to make good moral decisions. In this paper, we take a realistic look at recent attempts to program and to train robots to develop some form of moral competence. Examples of implemented robot behaviours that have been described as 'ethical', or 'minimally ethical' are considered, although they are found to only operate in quite constrained and limited application domains. There is a general recognition that current robots cannot be described as full moral agents, but it is less clear whether this will always be the case. Concerns are raised about the insufficiently justified use of terms such as 'moral' and 'ethical' to describe the behaviours of robots that are often more related to safety considerations than to moral ones. Given the current state of the art, two possible responses are identified. The first involves continued efforts to develop robots that are capable of ethical behaviour. The second is to argue against, and to attempt to avoid, placing robots in situations that demand moral competence and an understanding of the surrounding social situation. There is something to be gained from both responses, but it is argued here that the second is the more responsible choice.
The distribution of scarce healthcare resources is an increasingly important issue due to factors such as expensive ‘high tech’ medicine, longer life expectancies and the rising prevalence of chronic illness. Furthermore, in the current healthcare context lifestyle-related factors such as high blood pressure, tobacco use and obesity are believed to contribute significantly to the global burden of disease. As such, this paper focuses on an ongoing debate in the academic literature regarding the role of responsibility for illness in healthcare resource allocation: should patients with self-caused illness receive lower priority in access to healthcare resources? This paper critically describes the lower priority debate's 12 key arguments and maps out their relationships. This analysis reveals that most arguments have been refuted and that the debate has stalled and remains unresolved. In conclusion, we suggest progression could be achieved by inviting multidisciplinary input from a range of stakeholders for the development of evidence-based critical evaluations of existing arguments and the development of novel arguments, including the outstanding rebuttals.
Plans to automate killing by using robots armed with lethal weapons have been a prominent feature of most US military forces’ roadmaps since 2004. The idea is to have a staged move from ‘man-in-the-loop’ to ‘man-on-the-loop’ to full autonomy. While this may result in considerable military advantages, the policy raises ethical concerns with regard to potential breaches of International Humanitarian Law, including the Principle of Distinction and the Principle of Proportionality. Current applications of remotely piloted robot planes or drones offer lessons about how automated weapons platforms could be misused by extending the range of legally questionable, targeted killings by security and intelligence forces. Moreover, the alleged moral disengagement by remote pilots will only be exacerbated by the use of autonomous robots. Leaders in the international community need to address the difficult legal and moral issues now, before the current mass proliferation of development reaches fruition.
Current uses of robots in classrooms are reviewed and used to characterise four scenarios: Robot as Classroom Teacher; Robot as Companion and Peer; Robot as Care-eliciting Companion; and Telepresence Robot Teacher. The main ethical concerns associated with robot teachers are identified as: privacy; attachment, deception, and loss of human contact; and control and accountability. These are discussed in terms of the four identified scenarios. It is argued that classroom robots are likely to impact children’s privacy, especially when they masquerade as their friends and companions, when sensors are used to measure children’s responses, and when records are kept. Social robots designed to appear as if they understand and care for humans necessarily involve some deception, and could increase the risk of reduced human contact. Children could form attachments to robot companions or robot teachers, and this could have a deleterious effect on their social development. There are also concerns about the ability, and use, of robots to control or make decisions about children’s behaviour in the classroom. It is concluded that there are good reasons not to welcome fully fledged robot teachers, and that robot companions should be given a cautious welcome at best. The limited circumstances in which robots could be used in the classroom to improve the human condition by offering otherwise unavailable educational experiences are discussed.
The internet is widely used for health information and support, often by vulnerable people. Internet-based research raises both familiar and new ethical problems for researchers and ethics committees. While guidelines for internet-based research are available, it is unclear to what extent ethics committees use these. Experience of gaining research ethics approval for a UK study (SharpTalk), involving internet-based discussion groups with young people who self-harm and health professionals, is described. During ethical review, unsurprisingly, concerns were raised about the vulnerability of potential participants. These were dominated by the issue of anonymity, which also affected participant safety and consent. These ethical problems are discussed, along with our solutions, which included: participant usernames specific to the study, a closed website, private messaging facilities, a direct contact email to researchers, information about forum rules displayed on the website, a ‘report’ button for participants, links to online support, and a discussion room for forum moderators. This experience with SharpTalk suggests that an approach to ethics which recognises the relational aspects of research with vulnerable people is particularly useful for internet-based health research. The solutions presented here can act as guidance for researchers developing proposals and for ethics committees reviewing them.
Insufficient attention has been paid to the use of robots in classrooms. Robot “teachers” are being developed, but because Kline ignores such technological developments, it is not clear how they would fit within her framework. It is argued here that robots are not capable of teaching in any meaningful sense, and should be deployed only as educational tools.
Clinician gate-keeping is the process whereby healthcare providers prevent access to eligible patients for research recruitment. This paper contends that clinician gate-keeping violates three principles that underpin international ethical guidelines: respect for persons or autonomy; beneficence or a favourable balance of risks and potential benefits; and justice or a fair distribution of the benefits and burdens of research. In order to stimulate further research and debate, three possible strategies are also presented to eliminate gate-keeping: partnership with professional researchers; collaborative research design; and clinician education.
Individual form and relevant distinctions -- Reasons for affirming individual forms -- Types of essential structures -- Types of being -- Principles of individuality -- Individual form and mereology -- Challenges for individual forms -- Alternative accounts of individual form -- An alternative account revisited.
Arming uninhabited vehicles is an increasing trend. Widespread deployment can bring dangers for arms-control agreements and international humanitarian law. Armed UVs can destabilise the situation between potential opponents. Smaller systems can be used for terrorism. Using a systematic definition, existing international regulation of armed UVs in the fields of arms control, export control and transparency measures is reviewed; these partly include armed UVs, but leave large gaps. For preventive arms control a general prohibition of armed UVs would be best. If that is unattainable, several measures should be taken. An explicit prohibition of autonomous attack, that is, attack without a human decision, should be added to IHL. Concerning armed UVs remotely controlled by a human soldier, recommendations differ according to type or mission. New kinds of uninhabited nuclear-weapon carriers should be banned. Space weapons should be prohibited in general. UVs smaller than 0.2–0.5 m should be banned. Bigger remotely controlled armed UVs not equipped with weapons of mass destruction should be subject to numerical limitations in various categories. For these the Treaty on Conventional Armed Forces in Europe is an important precedent.
The theory of natural monopoly has been substantially transformed in recent years. In a clear and straightforward style, Dr. Sharkey gives an integrated presentation of the modern approach to this subject. Although the book is mainly conceptual in nature, the final chapter on natural monopoly in the telecommunications industry shows the practical applications of the theory. After an historical survey of natural monopoly, there follows a chapter stating and explaining the main results as well as giving a preliminary overview of the rest of the book, where concepts such as the subadditivity of costs, optimal pricing, sustainability, and destructive competition are presented. The essence of the subject is presented in a manner accessible to the general reader, though the book also provides a synthesis of the subject suitable for advanced students.
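The subadditivity of costs mentioned in this abstract is the standard formal criterion for natural monopoly, and can be stated briefly (this sketch uses generic notation, not necessarily the book's own):

```latex
% A cost function C is subadditive at output level q if, for every way
% of splitting q among k >= 2 firms, single-firm production is cheaper:
C(q) \;<\; \sum_{i=1}^{k} C(q_i)
\qquad \text{whenever } q = \sum_{i=1}^{k} q_i,\; k \ge 2 .
```

An industry is then a natural monopoly over the relevant output range when its cost function is subadditive there, so that one firm can always serve total demand at lower cost than any combination of firms.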
Robot Nannies Get a Wheel in the Door. Noel Sharkey & Amanda Sharkey - 2010 - Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems 11 (2):302-313.
The Crying Shame of Robot Nannies. Noel Sharkey & Amanda Sharkey - 2010 - Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems 11 (2):161-190.
At the core of the tort preemption cases before the U.S. Supreme Court is the extent to which state law can impose more stringent liability standards than federal law. The express preemption cases focus on whether the state law requirements are “different from, or in addition to” the federally imposed requirements. And the implied conflict preemption cases examine whether the state law standards are incompatible or at least at odds with the federal regulatory scheme. But the preemption cases in the appellate pipeline - what I shall term the “second wave” of preemption cases - address a separate analytic question. Their focus is less on the substantive aspects of regulatory standards, and more on their enforcement. When can state tort law impose substantive duties or obligations that are “parallel” to federal requirements without thereby encroaching upon a federal agency’s discretionary enforcement prerogative? This is the new frontier in products liability preemption. My proposed model suggests that courts facing these new issues should solicit input from federal agencies before resolving them. The model thereby offers a hybrid private-public model for the regulation of health and safety. It advocates an extension of my “agency reference model” to the “enforcement preemption” context: courts should place more emphasis on FDA input when deciding whether tort requirements are “parallel” to federal dictates, and whether, even if they are, they nonetheless infringe on the federal agency’s discretionary enforcement prerogatives. Courts would thus seek guidance from federal agencies to determine whether a private right of action exists for the enforcement, via state law claims, of federal regulations.
Although some authors claim that deception requires intention, we argue that there can be deception in social robotics, whether or not it is intended. By focusing on the deceived rather than the deceiver, we propose that false beliefs can be created in the absence of intention. Supporting evidence is found in both human and animal examples. Instead of assuming that deception is wrong only when carried out to benefit the deceiver, we propose that deception in social robotics is wrong when it leads to harmful impacts on individuals and society. The appearance and behaviour of a robot can lead to an overestimation of its functionality or to an illusion of sentience or cognition that can promote misplaced trust and inappropriate uses such as care and companionship of the vulnerable. We consider the allocation of responsibility for harmful deception. Finally, we make the suggestion that harmful impacts could be prevented by legislation, and by the development of an assessment framework for sensitive robot applications.
H. B. D. Kettlewell's field experiments on industrial melanism in the peppered moth, Biston betularia, have become the best known demonstration of natural selection in action. I argue that textbook accounts routinely portray this research as an example of controlled experimentation, even though this is historically misleading. I examine how idealized accounts of Kettlewell's research have been used by professional biologists and biology teachers. I also respond to some criticisms by David Rudge of my earlier discussions of this case study, and I question Rudge's claims about the importance of purely observational studies for the eventual acceptance and popularization of Kettlewell's explanation for the evolution of industrial melanism.