There is a growing literature on the concept of e-trust and on the feasibility and advisability of “trusting” artificial agents. In this paper we present an object-oriented model for thinking about trust in both face-to-face and digitally mediated environments. We review important recent contributions to this literature regarding e-trust in conjunction with presenting our model. We identify three important types of trust interactions and examine trust from the perspective of a software developer. Too often, the primary focus of research in this area has been on the artificial agents and the humans they may encounter after they are deployed. We contend that the humans who design, implement, and deploy the artificial agents are crucial to any discussion of e-trust and to understanding the distinctions among the concepts of trust, e-trust and face-to-face trust.
Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most interestingly for us, of AAs). We conclude that there is substantial and important scope, particularly in Computer Ethics, for the concept of moral agent not necessarily exhibiting free will, mental states or responsibility. This complements the more traditional approach, common at least since Montaigne and Descartes, which considers whether or not (artificial) agents have mental states, feelings, emotions and so on. By focussing directly on mind-less morality we are able to avoid that question and also many of the concerns of Artificial Intelligence. A vital component in our approach is the Method of Abstraction for analysing the level of abstraction (LoA) at which an agent is considered to act. The LoA is determined by the way in which one chooses to describe, analyse and discuss a system and its context. The Method of Abstraction is explained in terms of an interface or set of features or observables at a given LoA. Agenthood, and in particular moral agenthood, depends on a LoA. Our guidelines for agenthood are: interactivity (response to stimulus by change of state), autonomy (ability to change state without stimulus) and adaptability (ability to change the transition rules by which state is changed) at a given LoA. Morality may be thought of as a threshold defined on the observables in the interface determining the LoA under consideration. An agent is morally good if its actions all respect that threshold; and it is morally evil if some action violates it. That view is particularly informative when the agent constitutes a software or digital system, and the observables are numerical. Finally we review the consequences for Computer Ethics of our approach. In conclusion, this approach facilitates the discussion of the morality of agents not only in Cyberspace but also in the biosphere, where animals can be considered moral agents without their having to display free will, emotions or mental states, and in social contexts, where systems like organizations can play the role of moral agents. The primary cost of this facility is the extension of the class of agents and moral agents to embrace AAs.
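Where the agent is a software system and the observables at the chosen LoA are numerical, the threshold reading of morality admits a very direct illustration. The following is a minimal sketch in Python, not taken from the paper: the observable names, the threshold values and the action log are illustrative assumptions; only the evaluation rule follows the abstract (an agent is morally good if every action respects the threshold, and morally evil if some action violates it).

```python
# Minimal sketch of "morality as a threshold on observables" at a fixed LoA.
# The observables, threshold values and the action log are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Action:
    # Observables exposed at the chosen level of abstraction (LoA)
    data_deleted_mb: float
    requests_denied: int

def respects_threshold(a: Action) -> bool:
    """Threshold defined on the observables of the interface (hypothetical values)."""
    return a.data_deleted_mb <= 10.0 and a.requests_denied <= 3

def morally_good(actions: list[Action]) -> bool:
    """Good iff all observed actions respect the threshold."""
    return all(respects_threshold(a) for a in actions)

log = [Action(0.5, 0), Action(12.0, 1)]   # second action violates the threshold
print(morally_good(log))                  # False -> the agent counts as morally evil at this LoA
```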
When is it ethically acceptable to use artificial agents in health care? This article articulates some criteria for good care and then discusses whether machines as artificial agents that take over care tasks meet these criteria. Particular attention is paid to intuitions about the meaning of ‘care’, ‘agency’, and ‘taking over’, but also to the care process as a labour process in a modern organizational and financial-economic context. It is argued that while there is in principle no objection to using machines in medicine and health care, the idea of them functioning and appearing as ‘artificial agents’ is problematic and draws our attention to problems in human care which were already present before visions of machine care entered the stage. It is recommended that the discussion about care machines be connected to a broader discussion about the impact of technology on human relations in the context of modernity.
Thinking about how the law might decide whether to extend legal personhood to artificial agents provides a valuable testbed for philosophical theories of mind. Further, philosophical and legal theorising about personhood for artificial agents can be mutually informing. We investigate two case studies, drawing on legal discussions of the status of artificial agents. The first looks at the doctrinal difficulties presented by the contracts entered into by artificial agents. We conclude that it is not necessary or desirable to postulate artificial agents as legal persons in order to account for such contracts. The second looks at the potential for according sophisticated artificial agents legal personality with attendant constitutional protections similar to those accorded to humans. We investigate the validity of attributes that have been suggested as pointers of personhood, and conclude that they will take their place within a broader matrix of pragmatic, philosophical and extra-legal concepts.
In this paper, I discuss whether in a society where the use of artificial agents is pervasive, these agents should be recognized as having rights like those we accord to group agents. This kind of recognition I understand to be at once social and legal, and I argue that in order for an artificial agent to be so recognized, it will need to meet the same basic conditions in light of which group agents are granted such recognition. I then explore the implications of granting recognition in this manner. The thesis I will be defending is that artificial agents that do meet the conditions of agency in light of which we ascribe rights to group agents should thereby be recognized as having similar rights. The reason for bringing group agents into the picture is that, like artificial agents, they are not self-evidently agents of the sort to which we would naturally ascribe rights, or at least that is what the historical record suggests if we look, for example, at what it took for corporations to gain legal status in the law as group agents entitled to rights and, consequently, as entities subject to responsibilities. This is an example of agency ascribed to a nonhuman agent, and just as a group agent can be described as nonhuman, so can an artificial agent. Therefore, if these two kinds of nonhuman agents can be shown to be sufficiently similar in relevant ways, the agency ascribed to one can also be ascribed to the other—this despite the fact that neither is human, a major impediment when it comes to recognizing an entity as an agent proper, and hence as a bearer of rights.
Where artificial agents are not liable to be ascribed true moral agency and responsibility in their own right, we can understand them as acting as proxies for human agents, as making decisions on their behalf. What I call the ‘Moral Proxy Problem’ arises because it is often not clear for whom a specific artificial agent is acting as a moral proxy. In particular, we need to decide whether artificial agents should be acting as proxies for low-level agents — e.g. individual users of the artificial agents — or whether they should be moral proxies for high-level agents — e.g. designers, distributors or regulators, that is, those who can potentially control the choice behaviour of many artificial agents at once. Who we think an artificial agent is a moral proxy for determines from which agential perspective the choice problems artificial agents will be faced with should be framed: should we frame them like the individual choice scenarios previously faced by individual human agents? Or should we, rather, consider the expected aggregate effects of the many choices made by all the artificial agents of a particular type all at once? This paper looks at how artificial agents should be designed to make risky choices, and argues that the question of risky choice by artificial agents shows the moral proxy problem to be both practically relevant and difficult.
This paper provides a new analysis of e-trust, trust occurring in digital contexts, among the artificial agents of a distributed artificial system. The analysis endorses a non-psychological approach and rests on a Kantian regulative ideal of a rational agent, able to choose the best option for itself, given a specific scenario and a goal to achieve. The paper first introduces e-trust, describing its relevance for contemporary society, and then presents a new theoretical analysis of this phenomenon. The analysis first focuses on an agent’s trustworthiness, which is presented as the necessary requirement for e-trust to occur. Then, a new definition of e-trust as a second-order-property of first-order relations is presented. It is shown that the second-order-property of e-trust has the effect of minimising an agent’s effort and commitment in the achievement of a given goal. On this basis, a method is provided for the objective assessment of the levels of e-trust occurring among the artificial agents of a distributed artificial system.
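As a toy reading of the assessment idea (not the paper's own formal method), one can picture trustworthiness as an objectively measurable track record and e-trust as a property of a first-order relation such as delegation, whose effect is that the trustor drops supervision and so minimises its own effort. The function names, the success-ratio measure and the 0.9 threshold below are assumptions for illustration only.

```python
# Toy illustration: trustworthiness as an objective track record, and e-trust as a
# property of a first-order relation (delegation) whose effect is that the trustor
# drops monitoring, minimising its effort and commitment. Values are assumptions.

def trustworthiness(successes: int, attempts: int) -> float:
    """Assumed measure: success ratio over the trustee's past performances."""
    return successes / attempts if attempts else 0.0

def delegate(task, trustee_record, threshold=0.9):
    tw = trustworthiness(*trustee_record)
    if tw >= threshold:
        # e-trust obtains: delegate without monitoring (minimal effort/commitment)
        return {"task": task, "monitored": False, "trustworthiness": tw}
    # no e-trust: delegate but keep supervising
    return {"task": task, "monitored": True, "trustworthiness": tw}

print(delegate("fetch_data", (98, 100)))  # unmonitored delegation
print(delegate("fetch_data", (6, 10)))    # supervised delegation
```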
Although there have been efforts to integrate Semantic Web technologies and AI research approaches related to artificial agents, they remain relatively isolated from each other. Herein, we introduce a new ontology framework designed to support the knowledge representation of artificial agents’ actions within the context of the actions of other autonomous agents and inspired by standard cognitive architectures. The framework consists of four parts: 1) an event ontology for information pertaining to actions and events; 2) an epistemic ontology containing facts about knowledge, beliefs, perceptions and communication; 3) an ontology concerning future intentions, desires, and aversions; and, finally, 4) a deontic ontology for modeling obligations and prohibitions which limit agents’ actions. The architecture of the ontology framework is inspired by deontic cognitive event calculus as well as epistemic and deontic logic. We also describe a case study in which the proposed DCEO ontology supports autonomous vehicle navigation.
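A schematic sketch of the four-part structure described above follows in Python; the class and field names are illustrative assumptions, not DCEO's actual vocabulary, but they show how a deontic layer can constrain the actions an agent plans, as in the autonomous-vehicle case study.

```python
# Schematic sketch of the four-part framework; names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Event:                      # 1) event ontology: actions and events
    agent: str
    action: str
    time: float

@dataclass
class Belief:                     # 2) epistemic ontology: knowledge, beliefs, perceptions
    agent: str
    proposition: str

@dataclass
class Intention:                  # 3) intentional ontology: intentions, desires, aversions
    agent: str
    goal: str
    averse: bool = False

@dataclass
class Norm:                       # 4) deontic ontology: obligations and prohibitions
    action: str
    kind: str                     # "obligation" or "prohibition"

def permitted(action: str, norms: list[Norm]) -> bool:
    """An action is permitted unless some prohibition in the deontic layer targets it."""
    return not any(n.action == action and n.kind == "prohibition" for n in norms)

norms = [Norm("cross_red_light", "prohibition"), Norm("yield_to_pedestrian", "obligation")]
print(permitted("cross_red_light", norms))   # False: the deontic layer blocks the plan
```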
Artificial agents, particularly but not only those in the infosphere (Floridi, Information – A Very Short Introduction, Oxford University Press, Oxford, 2010a), extend the class of entities that can be involved in moral situations, for they can be correctly interpreted as entities that can perform actions with good or evil impact (moral agents). In this chapter, I clarify the concepts of agent and of artificial agent and then distinguish between issues concerning their moral behaviour vs. issues concerning their responsibility. The conclusion is that there is substantial and important scope, particularly in information ethics, for the concept of moral artificial agents not necessarily exhibiting free will, mental states or responsibility. This complements the more traditional approach, which considers whether artificial agents may have mental states, feelings, emotions and so forth. By focussing directly on “mind-less morality”, one is able to by-pass such questions as well as other difficulties arising in Artificial Intelligence, in order to tackle some vital issues in contexts where artificial agents are increasingly part of the everyday environment (Floridi, Metaphilosophy 39(4/5): 651–655, 2008a).
Strategies for improving the explainability of artificial agents are a key approach to support the understandability of artificial agents’ decision-making processes and their trustworthiness. However, since explanations do not lend themselves to standardization, finding solutions that fit the algorithmic-based decision-making processes of artificial agents poses a compelling challenge. This paper addresses the concept of trust in relation to complementary aspects that play a role in interpersonal and human–agent relationships, such as users’ confidence and their perception of artificial agents’ reliability. Particularly, this paper focuses on non-expert users’ perspectives, since users with little technical knowledge are likely to benefit the most from “post-hoc”, everyday explanations. Drawing upon the explainable AI and social sciences literature, this paper investigates how artificial agents’ explainability and trust are interrelated at different stages of an interaction. Specifically, the possibility of implementing explainability as a trust building, trust maintenance and restoration strategy is investigated. To this end, the paper identifies and discusses the intrinsic limits and fundamental features of explanations, such as structural qualities and communication strategies. Accordingly, this paper contributes to the debate by providing recommendations on how to maximize the effectiveness of explanations for supporting non-expert users’ understanding and trust.
In what ways should we include future humanoid robots, and other kinds of artificial agents, in our moral universe? We consider the Organic view, which maintains that artificial humanoid agents, based on current computational technologies, could not count as full-blooded moral agents, nor as appropriate targets of intrinsic moral concern. On this view, artificial humanoids lack certain key properties of biological organisms, which preclude them from having full moral status. Computationally controlled systems, however advanced in their cognitive or informational capacities, are, it is proposed, unlikely to possess sentience and hence will fail to be able to exercise the kind of empathic rationality that is a prerequisite for being a moral agent. The Organic view also argues that sentience and teleology require biologically based forms of self-organization and autonomous self-maintenance. The Organic view may not be correct, but at least it needs to be taken seriously in the future development of the field of Machine Ethics.
Standard notions in philosophy of mind have a tendency to characterize socio-cognitive abilities as if they were unique to sophisticated human beings. However, assuming that it is likely that we are soon going to share a large part of our social lives with various kinds of artificial agents, it is important to develop a conceptual framework providing notions that are able to account for various types of social agents. Recent minimal approaches to socio-cognitive abilities such as mindreading and commitment present a promising starting point from which one can expand the field of application not only to infants and non-human animals but also to artificial agents. Developing a minimal approach to the socio-cognitive ability of acting jointly, I present a foundation for future discussions about the question of how our conception of sociality can be expanded to artificial agents.
In their important paper “Autonomous Agents”, Floridi and Sanders use “levels of abstraction” to argue that computers are or may soon be moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. In their paper, Floridi and Sanders contributed definitions of autonomy, moral accountability and responsibility, but they have not explored deeply some essential questions that need to be answered by computer scientists who design artificial agents. One such question is, “Can an artificial agent that changes its own programming become so autonomous that the original designer is no longer responsible for the behavior of the artificial agent?” To explore this question, we distinguish between LoA1 (the user view) and LoA2 (the designer view) by exploring the concepts of unmodifiable, modifiable and fully modifiable tables that control artificial agents. We demonstrate that an agent with an unmodifiable table, when viewed at LoA2, distinguishes an artificial agent from a human one. This distinction supports our first counter-claim to Floridi and Sanders, namely, that such an agent is not a moral agent, and the designer bears full responsibility for its behavior. We also demonstrate that even if there is an artificial agent with a fully modifiable table capable of learning* and intentionality* that meets the conditions set by Floridi and Sanders for ascribing moral agency to an artificial agent, the designer retains strong moral responsibility.
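A minimal sketch of the “table” idea, viewed from the designer perspective (LoA2): the agent's behaviour is a transition table, and what separates the cases is whether the agent itself can rewrite that table. The Python below is an illustrative assumption about how such tables might be realised, not the authors' construction; the stimuli and actions are invented.

```python
# Minimal sketch: an agent's behaviour as a transition table, with the modifiability
# of that table visible only at the designer view (LoA2). Names and rules are invented.

class TableAgent:
    def __init__(self, table: dict, modifiable: bool):
        self._table = dict(table)
        self._modifiable = modifiable      # visible at LoA2, hidden at the user view (LoA1)

    def act(self, stimulus: str) -> str:
        return self._table.get(stimulus, "noop")

    def learn(self, stimulus: str, new_action: str) -> bool:
        """Only a (fully) modifiable agent may rewrite its own transition rules."""
        if not self._modifiable:
            return False                   # behaviour stays exactly as the designer fixed it
        self._table[stimulus] = new_action
        return True

fixed = TableAgent({"obstacle": "stop"}, modifiable=False)
adaptive = TableAgent({"obstacle": "stop"}, modifiable=True)
adaptive.learn("obstacle", "swerve")
print(fixed.act("obstacle"), adaptive.act("obstacle"))   # stop swerve
```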
A situated agent is one which operates within an environment. In most cases, the environment in which the agent exists will be more complex than the agent itself. This means that an agent, human or artificial, which wishes to carry out non-trivial operations in its environment must use techniques which allow an unbounded world to be represented within a cognitively bounded agent. We present a brief description of some important theories within the fields of epistemology and metaphysics. We then discuss ways in which philosophical problems of scepticism are related to the problems faced by knowledge representation. We suggest that some of the methods that philosophers have developed to address the problems of epistemology may be relevant to the problems of representing knowledge within artificial agents.
In this paper we address the question of when a researcher is justified in describing his or her artificial agent as demonstrating ethical decision-making. The paper is motivated by the amount of research being done that attempts to imbue artificial agents with expertise in ethical decision-making. It seems clear that computing systems make decisions, in that they make choices between different options; and there is scholarship in philosophy that addresses the distinction between ethical decision-making and general decision-making. Essentially, the qualitative difference between ethical decisions and general decisions is that ethical decisions must be part of the process of developing ethical expertise within an agent. We use this distinction in examining publicity surrounding a particular experiment in which a simulated robot attempted to safeguard simulated humans from falling into a hole. We conclude that any suggestions that this simulated robot was making ethical decisions were misleading.
We argue that the notion of trust, as it figures in an ethical context, can be illuminated by examining research in artificial intelligence on multi-agent systems in which commitment and trust are modeled. We begin with an analysis of a philosophical model of trust based on Richard Holton’s interpretation of P. F. Strawson’s writings on freedom and resentment, and we show why this account of trust is difficult to extend to artificial agents (AAs) as well as to other non-human entities. We then examine Margaret Urban Walker’s notions of “default trust” and “default, diffuse trust” to see how these concepts can inform our analysis of trust in the context of AAs. In the final section, we show how ethicists can improve their understanding of important features in the trust relationship by examining data resulting from a classic experiment involving AAs.
Artificial agents such as robots are performing increasingly significant ethical roles in society. As a result, there is a growing literature regarding their moral status, with many suggesting it is justified to regard manufactured entities as having intrinsic moral worth. However, the question of whether artificial agents could have the high degree of moral status that is attributed to human persons has largely been neglected. To address this question, the author developed a respect-based account of the ethical criteria for the moral status of persons. From this account, the paper employs an empirical test that must be passed in order for artificial agents to be considered alongside persons as having the corresponding rights and duties.
The Chinese room argument has presented a persistent headache in the search for Artificial Intelligence. Since it first appeared in the literature, various interpretations have been made, attempting to understand the problems posed by this thought experiment. Throughout all this time, some researchers in the Artificial Intelligence community have seen Symbol Grounding as proposed by Harnad as a solution to the Chinese room argument. The main thesis in this paper is that although related, these two issues present different problems in the framework presented by Harnad himself. The work presented here attempts to shed some light on the relationship between John Searle’s intentionality notion and Harnad’s Symbol Grounding Problem.
Artificial agents are progressively becoming more present in everyday-life situations and more sophisticated in their interaction affordances. In some specific cases, like Google Duplex, GPT-3 bots or DeepMind’s AlphaGo Zero, their capabilities reach or exceed human levels. The use contexts of everyday life necessitate making such agents understandable by laypeople. At the same time, displaying human levels of social behavior has kindled the debate over the adoption of Dennett’s ‘intentional stance’. By means of a comparative analysis of the literature on robots and virtual agents, we defend the thesis that approaching these artificial agents ‘as if’ they had intentions and forms of social, goal-oriented rationality is the only way to deal with their complexity on a daily basis. Specifically, we claim that this is the only viable strategy for non-expert users to understand, predict and perhaps learn from artificial agents’ behavior in everyday social contexts. Furthermore, we argue that as long as agents are transparent about their design principles and functionality, attributing intentions to their actions is not only essential, but also ethical. Additionally, we propose design guidelines inspired by the debate over the adoption of the intentional stance.
We propose an experimental method to study the possible emergence of sensemaking in artificial agents. This method involves analyzing the agent's behavior in a test bed environment that presents regularities in the possibilities of interaction afforded to the agent, while the agent has no presuppositions about the underlying functioning of the environment that explains such regularities. We propose a particular environment that permits such an experiment, called the Small Loop Problem. We argue that the agent's behavior demonstrates sensemaking if the agent learns to exploit regularities of interaction to fulfill its self-motivation as if it understood (at least partially) the underlying functioning of the environment. As a corollary, we argue that sensemaking and self-motivation come together. We propose a new method to generate self-motivation in an artificial agent called interactional motivation. An interactionally motivated agent seeks to perform interactions with predefined positive values and to avoid interactions with predefined negative values.
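The following toy sketch (not the authors' implementation) illustrates interactional motivation under assumed interaction names, valences and a hidden environmental regularity: the agent learns, for each context, the expected valence of intending each interaction, and comes to exploit the regularity without any model of the environment's inner workings.

```python
# Toy sketch of interactional motivation: interactions carry predefined valences,
# and the agent learns which intended interaction pays off after which previous one.
# Interaction names, valences and the hidden regularity are assumptions.

import random
from collections import defaultdict

INTENDABLE = ["step", "turn"]
VALENCE = {"step": 1, "turn": 0, "bump": -5}       # predefined interaction values (assumed)

def enact(previous, intended):
    """Hidden regularity (assumed): stepping right after a bump bumps again;
    otherwise a step occasionally bumps at random."""
    if intended == "turn":
        return "turn"
    if previous == "bump":
        return "bump"
    return "bump" if random.random() < 0.1 else "step"

expected = defaultdict(float)                       # (previous, intended) -> learned valence estimate
previous, total = "turn", 0
for _ in range(2000):
    if random.random() < 0.1:                       # occasional exploration
        intended = random.choice(INTENDABLE)
    else:
        intended = max(INTENDABLE, key=lambda i: expected[(previous, i)])
    enacted = enact(previous, intended)
    expected[(previous, intended)] += 0.1 * (VALENCE[enacted] - expected[(previous, intended)])
    total += VALENCE[enacted]
    previous = enacted

print(total)   # positive once the agent has learned to turn after a bump rather than step again
```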
In this essay, I describe and explain the standard accounts of agency, natural agency, artificial agency, and moral agency, as well as articulate what are widely taken to be the criteria for moral agency, supporting the contention that this is the standard account with citations from such widely used and respected professional resources as the Stanford Encyclopedia of Philosophy, Routledge Encyclopedia of Philosophy, and the Internet Encyclopedia of Philosophy. I then flesh out the implications of some of these well-settled theories with respect to the prerequisites that an ICT must satisfy in order to count as a moral agent accountable for its behavior. I argue that each of the various elements of the necessary conditions for moral agency presupposes consciousness, i.e., the capacity for inner subjective experience like that of pain or, as Nagel puts it, the possession of an internal something-it-is-like-to-be. I ultimately conclude that the issue of whether artificial moral agency is possible depends on the issue of whether it is possible for ICTs to be conscious.
The transfer of tasks with sometimes far-reaching implications to autonomous systems raises a number of ethical questions. In addition to fundamental questions about the moral agency of these systems, behavioral issues arise. We investigate the empirically accessible question of whether the imposition of harm by an agent is systematically judged differently when the agent is artificial and not human. The results of a laboratory experiment suggest that decision-makers can actually avoid punishment more easily by delegating to machines than by delegating to other people. Our results imply that the availability of artificial agents could provide stronger incentives for decision-makers to delegate sensitive decisions.
This paper discusses the relation between intelligence and motivation in artificial agents, developing and briefly arguing for two theses. The first, the orthogonality thesis, holds (with some caveats) that intelligence and final goals (purposes) are orthogonal axes along which possible artificial intellects can freely vary—more or less any level of intelligence could be combined with more or less any final goal. The second, the instrumental convergence thesis, holds that as long as they possess a sufficient level of intelligence, agents having any of a wide range of final goals will pursue similar intermediary goals because they have instrumental reasons to do so. In combination, the two theses help us understand the possible range of behavior of superintelligent agents, and they point to some potential dangers in building such an agent.
Following up on Thomas Nagel’s paper “What is it like to be a bat?” and Alan Turing’s essay “Computing machinery and intelligence,” it shall be claimed that a successful interaction of human beings and autonomous artificial agents depends more on which characteristics human beings ascribe to the agent than on whether the agent really has those characteristics. It will be argued that Masahiro Mori’s concept of the “uncanny valley” as well as evidence from several empirical studies supports that assertion. Finally, some tentative conclusions concerning moral implications of the arguments presented here shall be drawn.
Recently, there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher-order cognitive faculties, such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer simulations such as software agents or embodied robots. The push to implement computational models of this kind has created the field of artificial general intelligence (AGI). Moral decision making is arguably one of the most challenging tasks for computational approaches to higher-order cognition. The need for increasingly autonomous artificial agents to factor moral considerations into their choices and actions has given rise to another new field of inquiry variously known as Machine Morality, Machine Ethics, Roboethics, or Friendly AI. In this study, we discuss how LIDA, an AGI model of human cognition, can be adapted to model both affective and rational features of moral decision making. Using the LIDA model, we will demonstrate how moral decisions can be made in many domains using the same mechanisms that enable general decision making. Comprehensive models of human cognition typically aim for compatibility with recent research in the cognitive and neural sciences. Global workspace theory, proposed by the neuropsychologist Bernard Baars (1988), is a highly regarded model of human cognition that is currently being computationally instantiated in several software implementations. LIDA (Franklin, Baars, Ramamurthy, & Ventura, 2005) is one such computational implementation. LIDA is both a set of computational tools and an underlying model of human cognition, which provides mechanisms that are capable of explaining how an agent’s selection of its next action arises from bottom-up collection of sensory data and top-down processes for making sense of its current situation. We will describe how the LIDA model helps integrate emotions into the human decision-making process, and we will elucidate a process whereby an agent can work through an ethical problem to reach a solution that takes account of ethically relevant factors.
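A drastically simplified, purely illustrative sketch of a global-workspace-style cycle follows; it is not the LIDA codebase, and all names and numbers are assumptions. It only shows the shape of the mechanism the abstract describes: coalitions of percepts compete by salience, the winner is broadcast, and action selection weighs expected utility together with an affective or ethical valence.

```python
# Illustrative global-workspace-style cycle (not LIDA): salience competition,
# broadcast, then action selection modulated by an ethical/affective valence.
# All names and numbers are assumptions.

def cognitive_cycle(percepts, actions):
    # 1. Competition for the workspace: the most salient coalition wins and is broadcast.
    broadcast = max(percepts, key=lambda p: p["salience"])
    # 2. Action selection: candidates are scored in the light of the broadcast,
    #    with valence modulating plain expected utility.
    def score(a):
        relevance = 1.0 if broadcast["content"] in a["relevant_to"] else 0.2
        return relevance * (a["utility"] + a["ethical_valence"])
    return max(actions, key=score)

percepts = [{"content": "pedestrian", "salience": 0.9},
            {"content": "green_light", "salience": 0.4}]
actions = [{"name": "brake", "utility": 0.2, "ethical_valence": 1.0, "relevant_to": {"pedestrian"}},
           {"name": "accelerate", "utility": 0.8, "ethical_valence": -1.0, "relevant_to": {"green_light"}}]
print(cognitive_cycle(percepts, actions)["name"])   # brake
```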
By comparing mechanisms in nativism, empiricism, and culturalism, the target article by Steels & Belpaeme (S&B) emphasizes the influence of communicational constraint on sharing color categories. Our commentary suggests deeper considerations of some of their claims, and discusses some modifications that may help in the study of communicational constraints in both humans and robots.
Unlike natural agents, artificial agents are, to varying extent, designed according to sets of principles or assumptions. We argue that the designer’s philosophical position on truth, belief and knowledge has far-reaching implications for the design and performance of the resulting agents. Of the many sources of design information and background, we believe philosophical theories are under-rated as valuable influences on the design process. To explore this idea we have implemented some computer-based agents with their control algorithms inspired by two strongly contrasting philosophical positions. A series of experiments on these agents shows that, despite having common tasks and goals, the behaviour of the agents is markedly different, and this can be attributed to their individual approaches to belief and knowledge. We discuss these findings and their support for the view that epistemological theories have a particular relevance for artificial agent design.
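For illustration only (these are not the authors' agents or algorithms), here are two belief-forming policies inspired by contrasting epistemological stances: a credulous agent that believes every observation, and a corroborationist agent that believes only what repeated observations agree on. The scenario and the corroboration threshold are assumptions.

```python
# Two illustrative belief-forming policies shaped by different epistemological stances.

from collections import Counter

class CredulousAgent:
    def __init__(self):
        self.beliefs = {}
    def observe(self, key, value):
        self.beliefs[key] = value            # the latest observation is immediately believed

class CorroborationAgent:
    def __init__(self, required=3):
        self.evidence = Counter()
        self.required = required
        self.beliefs = {}
    def observe(self, key, value):
        self.evidence[(key, value)] += 1
        if self.evidence[(key, value)] >= self.required:
            self.beliefs[key] = value        # believe only well-corroborated observations

a, b = CredulousAgent(), CorroborationAgent()
for v in ["clear", "blocked", "clear", "clear"]:   # one noisy "blocked" reading
    a.observe("door", v)
    b.observe("door", v)
print(a.beliefs, b.beliefs)   # same final belief here, reached by very different routes
```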
The potential capacity for robots to deceive has received considerable attention recently. Many papers explore the technical possibility for a robot to engage in deception for beneficial purposes (e.g., in education or health). In this short experimental paper, I focus on a more paradigmatic case: robot lying (lying being the textbook example of deception) for nonbeneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment that investigates the following three questions: (a) Are ordinary people willing to ascribe deceptive intentions to artificial agents? (b) Are they as willing to judge a robot lie as a lie as they would be when human agents engage in verbal deception? (c) Do people blame a lying artificial agent to the same extent as a lying human agent? The response to all three questions is a resounding yes. This, I argue, implies that robot deception and its normative consequences deserve considerably more attention than they presently receive.
The distinction between personal level explanations and subpersonal ones has been subject to much debate in philosophy. We understand it as one between explanations that focus on an agent’s interaction with its environment, and explanations that focus on the physical or computational enabling conditions of such an interaction. The distinction, understood this way, is necessary for a complete account of any agent, rational or not, biological or artificial. In particular, we review some recent research in Artificial Life that purports to do entirely without the distinction, while using agent-centred concepts all the way. It is argued that the rejection of agent level explanations in favour of mechanistic ones is due to an unmotivated need to choose between representationalism and eliminativism. The dilemma is a false one if the possibility of a radical form of externalism is considered.
This study investigates the acquisition of integrated object manipulation and categorization abilities through a series of experiments in which human adults and artificial agents were asked to learn to manipulate two-dimensional objects that varied in shape, color, weight, and color intensity. The analysis of the obtained results and the comparison of the behavior displayed by human and artificial agents allowed us to identify the key role played by features affecting the agent/environment interaction, the relation between category and action development, and the role of cognitive biases originating from previous knowledge.
Within the philosophy of biology, promising steps have recently been made towards a biologically grounded concept of agency. Agency is described as bio-agency: the intrinsically normative adaptive behaviour of human and non-human organisms, arising from their biological autonomy. My paper assesses the bio-agency approach by examining criticism recently directed by its proponents against the project of embodied robotics. Defenders of the bio-agency approach have claimed that embodied robots do not, and for fundamental reasons cannot, qualify as artificial agents because they do not fully realise biological autonomy. More particularly, it has been claimed that embodied robots fail to be agents because agency essentially requires metabolism. I shall argue that this criticism, while being valuable in bringing to the fore important differences between bio-agents and existing embodied robots, nevertheless is too strong. It relies on inferences from agency-as-we-know-it to agency-as-it-could-be which are justified neither empirically nor conceptually.
Advances in artificial intelligence create an increasing similarity between the performance of AI systems or AI-based robots and human communication. They raise the questions: whether it is possible to communicate with, understand, and even empathically perceive artificial agents; whether we should ascribe actual subjectivity and thus quasi-personal status to them beyond a certain level of simulation; and what will be the impact of an increasing dissolution of the distinction between simulated and real encounters. To answer these questions, the paper argues that the precondition for actually understanding others consists in the implicit assumption of the subjectivity of our counterpart, which makes shared feelings and a “we-intentionality” possible. This assumption is ultimately based on the presupposition of a shared form of life, conceived here as “conviviality.” The possibility that future artificial agents could meet these preconditions is refuted on the basis of embodied and enactive cognition, which links subjectivity and consciousness to the aliveness of an organism. Even if subjectivity is in principle impossible for artificial agents, the distinction between simulated and real subjectivity might nevertheless become increasingly blurred. Here, possible consequences are discussed, especially using the example of virtual psychotherapy. Finally, the paper makes a case for a mindful approach to the language we use to talk about artificial systems and pleads for preventing a systematic pretense of subjectivity.
For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and remain a remote prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility.
One of the key problems the field of Machine Consciousness (MC) is currently facing is the need to accurately assess the potential level of consciousness that an artificial agent might develop. This paper presents a novel artificial consciousness scale designed to provide a pragmatic and intuitive reference in the evaluation of MC implementations. The version of ConsScale described in this work provides a comprehensive evaluation mechanism which enables the estimation of the potential degree of consciousness of most of the existing artificial implementations. This scale offers both well-defined levels of artificial consciousness and a method to calculate an orientative numerical score. A set of architectural and cognitive criteria is considered for each level of the scale. This permits the definition of a cognitive framework in which MC implementations can be ranked according to their potential capability to reproduce functional synergies associated with consciousness. The probability of the implementations having any phenomenal states related to the assessed functional synergy is not specifically addressed in this paper; nevertheless, it could be thoughtfully discussed elsewhere.
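To convey the flavour of a levelled, criteria-based scale, here is a hypothetical scoring sketch; the levels, criteria and scoring rule are invented for illustration and are not ConsScale's actual levels or its score formula.

```python
# Hypothetical levelled scoring sketch (NOT ConsScale's levels, criteria or formula):
# an implementation is ranked by cumulative satisfaction of per-level criteria.

LEVELS = [
    ("reactive",    {"fixed_reflexes"}),
    ("adaptive",    {"learning", "memory"}),
    ("attentional", {"selective_attention", "evaluates_own_actions"}),
]

def score(capabilities: set) -> tuple[str, float]:
    achieved, value = "none", 0.0
    for name, criteria in LEVELS:
        met = len(criteria & capabilities) / len(criteria)
        if met == 1.0:
            achieved, value = name, value + 1.0      # full level reached
        else:
            value += met                              # partial credit for the next level only
            break
    return achieved, value

print(score({"fixed_reflexes", "learning", "memory", "selective_attention"}))
# ('adaptive', 2.5) -> ranked below a fully attentional implementation
```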
The paper analyzes whether it is possible to grow an I–Thou relation in the sense of Martin Buber with an artificial, conversational agent developed with Natural Language Processing techniques. The requirements for such an agent, the possible approaches for the implementation, and their limitations are discussed. The relation of the achievement of this goal with the Turing test is emphasized. Novel perspectives on the I–Thou and I–It relations are introduced according to the sociocultural paradigm and Mikhail Bakhtin’s dialogism, polyphony, inter-animation, and the carnivalesque. The polyphonic model, the associated analysis method, and the support tools are introduced. Some ideas on how the polyphonic model may be used for the implementation of a computer application able to analyze some features of the existence of an I–Thou relation are included.