We study whether robots can satisfy the conditions of an agent fit to be held morally responsible, with a focus on autonomy and self-control. An analogy between robots and human groups enables us to modify arguments concerning collective responsibility for studying questions of robot responsibility. We employ Mele’s history-sensitive account of autonomy and responsibility to argue that even if robots were to have all the capacities required of moral agency, their history would deprive them of autonomy in a responsibility-undermining way. We also study whether humans and technological artifacts like robots can form hybrid collective agents that could be morally responsible for their actions, and give an argument against such a possibility.
Raul Hakli, Kaarlo Miller and Raimo Tuomela (University of Helsinki), “Two Kinds of We-Reasoning”, Economics and Philosophy 26: 291–320. © Cambridge University Press. doi:10.1017/S0266267110000386.
Various sources in the literature claim that the deduction theorem does not hold for normal modal or epistemic logic, whereas others present versions of the deduction theorem for several normal modal systems. It is shown here that the apparent problem arises from an objectionable notion of derivability from assumptions in an axiomatic system. When a traditional Hilbert-type system of axiomatic logic is generalized into a system for derivations from assumptions, the necessitation rule has to be modified in a way that restricts its use to cases in which the premiss does not depend on assumptions. This restriction is entirely analogous to the restriction of the rule of universal generalization of first-order logic. A necessitation rule with this restriction permits a proof of the deduction theorem in its usual formulation. Other suggestions presented in the literature to deal with the problem are reviewed, and the present solution is argued to be preferable to the other alternatives. A contraction- and cut-free sequent calculus equivalent to the Hilbert system for basic modal logic shows the standard failure argument untenable by proving the underivability of □A from A.
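As a sketch of the proposal (our reconstruction, not the paper’s exact notation): necessitation is restricted to premisses that depend on no assumptions, which blocks the derivation that made the deduction theorem appear to fail.

```latex
% Unrestricted necessitation under assumptions would yield
% A \vdash \Box A, and the deduction theorem would then give
% \vdash A \to \Box A, which is not valid in basic modal logic.
% The restricted rule applies only to assumption-free premisses:
\[
\frac{\vdash A}{\vdash \Box A}
\quad \text{(Nec, provided $A$ does not depend on assumptions)}
\]
% With this restriction, the deduction theorem holds in its
% usual formulation:
\[
\Gamma \cup \{A\} \vdash B
\quad\Longrightarrow\quad
\Gamma \vdash A \to B
\]
```

The restriction plays the same role as the eigenvariable condition on universal generalization in first-order Hilbert systems, as the abstract notes.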
Endorsing the idea of group knowledge seems to entail the possibility of group belief as well, because it is usually held that knowledge entails belief. It is here studied whether it would be possible to grant that groups can have knowledge without being committed to the controversial view that groups can have beliefs. The answer is positive on the assumption that knowledge can be based on acceptance as well as belief. The distinction between belief and acceptance can be seen as a refinement of the ordinary language concept of belief, and it may be useful in understanding the nature of epistemic justification and classifying various types of epistemic subjects.
Epistemic justification of non-summative group beliefs is studied in this paper. Such group beliefs are understood to be voluntary acceptances, the justification of which differs from that of involuntary beliefs. It is argued that whereas epistemic evaluation of involuntary beliefs can be seen not to require reasons, justification of voluntary acceptance of a proposition as true requires that the agent, a group or an individual, can provide reasons for the accepted view. This basic idea is studied in relation to theories of dialectical justification, in which justification is taken to require the ability to justify. Since the reasons offered can in principle always be challenged, there is no ultimate end to the dialectical chain of justification. This makes justification of acceptance, and thus group belief, social and, in a way, contextual, but this does not seem to entail strong forms of epistemic relativism.
A proof-theoretical treatment of collectively accepted group beliefs is presented through a multi-agent sequent system for an axiomatization of the logic of acceptance. The system is based on a labelled sequent calculus for propositional multi-agent epistemic logic with labels that correspond to possible worlds and a notation for internalized accessibility relations between worlds. The system is contraction- and cut-free. Extensions of the basic system are considered, in particular with rules that allow the possibility of operative members or legislators. Completeness with respect to the underlying Kripke semantics follows from a general direct and uniform argument for labelled sequent calculi extended with mathematical rules for frame properties. As an example of the use of the calculus we present an analysis of the discursive dilemma.
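To illustrate what such labelled rules typically look like (a hypothetical sketch in the style of Negri-type labelled calculi; the paper’s actual rules and notation for the acceptance modality may differ), right and left rules for an acceptance operator $\mathcal{A}_G$ of a group $G$ would take roughly this form, with $x, y$ labels for possible worlds and $xR_Gy$ an internalized accessibility atom:

```latex
% Right rule: y must be fresh (not in the conclusion).
\[
\frac{xR_Gy,\ \Gamma \Rightarrow \Delta,\ y{:}\varphi}
     {\Gamma \Rightarrow \Delta,\ x{:}\mathcal{A}_G\varphi}
\ (R\mathcal{A}_G)
\qquad
% Left rule: the principal formula is repeated in the premiss,
% which is what makes contraction admissible.
\frac{y{:}\varphi,\ x{:}\mathcal{A}_G\varphi,\ xR_Gy,\ \Gamma \Rightarrow \Delta}
     {x{:}\mathcal{A}_G\varphi,\ xR_Gy,\ \Gamma \Rightarrow \Delta}
\ (L\mathcal{A}_G)
\]
```

The freshness condition on $y$ in the right rule mirrors the eigenvariable condition of first-order sequent calculi, and frame properties of $R_G$ can be added as extra mathematical rules without disturbing cut elimination.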
We study whether robots can satisfy the conditions for agents fit to be held responsible in a normative sense, with a focus on autonomy and self-control. An analogy between robots and human groups enables us to modify arguments concerning collective responsibility for studying questions of robot responsibility. On the basis of Alfred R. Mele’s history-sensitive account of autonomy and responsibility it can be argued that even if robots were to have all the capacities usually required of moral agency, their history as products of engineering would undermine their autonomy and thus responsibility.
Attitudes toward robots influence the tendency to accept or reject robotic devices. Thus it is important to investigate whether and how attitudes toward robots can change. In this pilot study we investigate attitudinal changes in elderly citizens toward a tele-operated robot in relation to three parameters: (i) the information provided about robot functionality, (ii) the number of encounters, and (iii) personality type. Fourteen elderly residents at a rehabilitation center participated. Pre-encounter attitudes toward robots, anthropomorphic thinking, and personality were assessed. Thereafter the participants interacted with a tele-operated robot (Telenoid) during their lunch (c. 30 min.) for up to 3 days. Half of the participants were informed that the robot was tele-operated (IC) whilst the other half were naïve to its functioning (UC). Post-encounter assessments of attitudes toward robots and anthropomorphic thinking were undertaken to assess change. Attitudes toward robots were assessed with a new generic 35-item questionnaire (attitudes toward social robots scale: ASOR-5), offering a differentiated conceptualization of the conditions for social interaction. There was no significant difference between the IC and UC groups in attitude change toward robots, though trends were observed. Personality was correlated with some tendencies for attitude changes: Extraversion correlated with positive attitude changes to intimate-personal relatedness with the robot (r = 0.619) and to psychological relatedness (r = 0.581), whilst Neuroticism correlated negatively (r = -0.582) with mental relatedness with the robot. The results tentatively suggest that neither information about functionality nor direct repeated encounters are pivotal in changing attitudes toward robots in elderly citizens.
This may reflect a cognitive congruence bias where the robot is experienced in congruence with initial attitudes, or it may support action-based explanations of cognitive dissonance reduction, given that robots, unlike computers, are not yet perceived as action targets. Specific personality traits may be indicators of attitude change relating to specific domains of social interaction. Implications and future directions are discussed.
Background: The surge in the development of social robots gives rise to an increased need for systematic methods of assessing attitudes towards robots. Aim: This study presents the development of a questionnaire for assessing attitudinal stance towards social robots: the ASOR. Methods: The 37-item ASOR questionnaire was developed by a task-force with members from different disciplines. It was founded on theoretical considerations of how social robots could influence five different aspects of relatedness. Results: Three hundred thirty-nine people responded to the survey. Factor analysis of the ASOR yielded a three-factor solution consisting of a total of 25 items: “ascription of mental capacities”, “ascription of socio-practical capacities”, and “ascription of socio-moral status”. These data were further triangulated with data from interviews (n = 10). Conclusion: The ASOR allows for assessment of three distinct facets of ascription of capacities to social robots and offers a new type of assessment of attitudes towards social robots. It appeared that the ASOR not only assesses ascription of capacities to social robots but also gauges overall positive attitudes towards social robots.
Francesco Guala has written an important book proposing a new account of social institutions and criticizing existing ones. We focus on Guala’s critique of collective acceptance theories of institutions, widely discussed in the literature of collective intentionality. Guala argues that at least some of the collective acceptance theories commit their proponents to an antinaturalist methodology of social science. What is at stake here is what kind of philosophizing is relevant for the social sciences. We argue that a Searlean version of collective acceptance theory can be defended against Guala’s critique, and question the sufficiency of Guala’s account of the ontology of the social world.
The robotics industry is growing rapidly, and to a large extent the development of this market sector is due to the area of social robotics – the production of robots that are designed to enter the space of human social interaction, both physically and semantically. Since social robots present a new type of social agent, they have been aptly classified as a disruptive technology, i.e. the sort of technology which affects the core of our current social practices and might lead to profound cultural and social change.

Due to its disruptive and innovative potential, social robotics raises not only questions about utility, ethics, and legal aspects, but also calls for “robo-philosophy” – comprehensive philosophical reflection from the perspectives of all philosophical disciplines. This book presents the proceedings of the first conference in this new area, “Robo-Philosophy 2014 – Sociable Robots and the Future of Social Relations”, held in Aarhus, Denmark, in August 2014. The short papers and abstracts collected here address questions of social robotics from the perspectives of philosophy of mind, social ontology, ethics, meta-ethics, political philosophy, aesthetics, intercultural philosophy, and metaphilosophy.

Social robotics is still in its early stages, but it is precisely now that we need to reflect on its possible cultural repercussions. This book is accessible to a wide readership and will be of interest to everyone involved in the development and use of social robotics applications, from social roboticists to policy makers.
This volume offers eleven philosophical investigations into our future relations with social robots – robots that are specially designed to engage and connect with human beings. The contributors present cutting-edge research that examines whether, and on which terms, robots can become members of human societies. Can our relations to robots be said to be “social”? Can robots enter into normative relationships with human beings? How will human social relations change when we interact with robots at work and at home? The authors of this volume explore these questions from the perspectives of philosophy, cognitive science, psychology, and robotics. The first three chapters offer a taxonomy for the classification of simulated social interactions, investigate whether human social interactions with robots can be genuine, and discuss the significance of social relations for the formation of human individuality. Subsequent chapters clarify whether robots could be said to actually follow social norms, whether they could live up to the social meaning of care in caregiving professions, and how we will need to program robots so that they can negotiate the conventions of human social space and collaborate with humans. Can we perform joint actions with robots, where both sides need to honour commitments, and how will such new commitments and practices change our regional cultures? The authors connect research in social robotics and empirical studies in Human-Robot Interaction to recent debates in social ontology, social cognition, as well as ethics and philosophy of technology.

The book is a response to the challenge that social robotics presents for our traditional conceptions of social interaction, which presuppose such essential capacities as consciousness, intentionality, agency, and normative understanding. The authors develop insightful answers along new interdisciplinary pathways in “robophilosophy”, a new research area that will help us to shape the “robot revolution”, the distinctive technological change of the beginning of the 21st century.
This special section originates from a workshop ‘New Horizons in Action and Agency’ that we organized in August 2019 at the University of Helsinki, Finland. The aim of the workshop was to provide a...
The aim of this paper is to present a philosophically inspired list of minimal requirements for social agency that may serve as a guideline for social robotics. Such a list does not aim at detailing the cognitive processes behind sociality but at providing an implementation-free characterization of the capacities and skills associated with sociality. We employ the notion of intentional stance as a methodological ground to study intentional agency and extend it into a social stance that takes into account social features of behavior. We discuss the basic requirements of sociality and different ways to understand them, and suggest some potential benefits of understanding them in an instrumentalist way in the context of social robotics.
This collection includes not only articles by Raimo Tuomela and his co-authors that have been decisive in social ontology; an extensive introduction also provides an account of the impact of these works and of the most important debates in the field, and addresses future issues. Thus, the book gives insights that are still viable and worthy of further scrutiny and development, making it an inspiring source for those engaged in the debates of the field today.
In philosophical action theory there is wide agreement that intentions, often understood in terms of plans, play a major role in the deliberation of rational agents. Planning accounts of rational agency challenge game- and decision-theoretical accounts in that they allow for rationality of actions that do not necessarily maximize expected utility but instead aim at satisfying long-term goals. Another challenge for the game-theoretical understanding of rational agency has recently been put forth by the theory of team reasoning, in which the agents select their actions by doing their parts in the collective action that is best for the group. Both planning and team reasoning can be seen as instances of a similar type of reasoning in which actions are selected on the basis of an evaluation of a larger unit than an individual’s momentary act. In recent theories of collective agency, both planning and team reasoning have been defended against orthodox game theory, but, interestingly, by different authors: Raimo Tuomela has defended team reasoning in his theory of group agency, but he ignores temporally extended planning in this context. Michael Bratman has extended his theory of planning to the case of shared agency, but he does not seem to see a role for team reasoning in understanding shared intentional activities. In this paper, we argue that both accounts suffer from this one-sidedness. We aim to combine the main insights of Tuomela’s we-mode approach and Bratman’s planning approach into a fruitful synthesis that we think is necessary for understanding the nature of group agency.