Key elements of Randolph Clarke's libertarian account of freedom, which requires both agent-causation and non-deterministic event-causation in the production of free action, are assessed with an eye toward determining whether agent-causal accounts can accommodate the truth of judgments of moral obligation.
The problem of freedom and determinism has vexed philosophers for several millennia, and continues to be a topic of lively debate today. One of the proposed solutions to the problem that has received a great deal of attention is the Theory of Agent Causation. While the theory has enjoyed its share of advocates, and perhaps more than its share of critics, the theory’s advocates and critics have always agreed on one thing: the Theory of Agent Causation is an incompatibilist theory. That is, both believers and nonbelievers in the theory have taken it for granted that the most plausible version of the Theory of Agent Causation is one according to which freedom and determinism are incompatible. In fact, so entrenched is this assumption that no one on either side of the debate has ever questioned it. Yet it turns out that this assumption is wrong – the most plausible version of the Theory of Agent Causation is a compatibilist one.
In Morals From Motives, Michael Slote defends an agent-based theory of right action according to which right acts are those that express virtuous motives like benevolence or care. Critics have claimed that Slote’s view—and agent-based views more generally—cannot account for several basic tenets of commonsense morality. In particular, the critics maintain that agent-based theories: (i) violate the deontic axiom that ought implies can, (ii) cannot allow for a person’s doing the right thing for the wrong reason, and (iii) do not yield clear verdicts in a number of cases involving conflicting motives and motivational over-determination. In this paper I develop a new agent-based theory of right action designed to avoid the problems presented for Slote’s view. This view makes morally right action a matter of expressing an optimal balance of virtue over vice and commands agents in each situation to improve their degree of excellence to the greatest extent possible.
In this paper, I argue that trying is the locus of freedom and moral responsibility. Thus, any plausible view of free and responsible action must accommodate and account for free tryings. I then consider a version of agent causation whereby the agent directly causes her tryings. On this view, the agent is afforded direct control over her efforts and there is no need to posit—as other agent-causal theorists do—an uncaused event. I discuss the potential advantages of this sort of view, and its challenges.
According to the “Textbook View,” there is an extensional dispute between consequentialists and deontologists, in virtue of the fact that only the latter defend “agent-relative” principles—principles that require an agent to have a special concern with making sure that she does not perform certain types of action. I argue that, contra the Textbook View, there are agent-neutral versions of deontology. I also argue that there need be no extensional disagreement between the deontologist and consequentialist, as characterized by the Textbook View.
This article provides an answer to the question: What is the function of cognition? Answering this question makes it possible to investigate what the simplest cognitive systems are. It addresses the question by treating cognition as a solution to a design problem. It defines a nested sequence of design problems: (1) How can a system persist? (2) How can a system affect its environment to improve its persistence? (3) How can a system utilize better information from the environment to select better actions? And (4) How can a system reduce its inherent informational limitations to achieve more successful behavior? This provides a corresponding nested sequence of system classes: (1) autonomous systems, (2) (re)active autonomous systems, (3) informationally controlled autonomous systems (autonomous agents), and (4) cognitive systems. The article provides the following characterization of cognition: the cognitive system is the set of mechanisms of an autonomous agent that (1) allow an increase of the correlation and integration between the environment and the information system of the agent, so that (2) the agent can improve the selection of actions and thereby produce more successful behavior. Finally, it shows that common cognitive capacities satisfy the characterization: learning, memory, representation, decision making, reasoning, attention, and communication.
An idea that has attracted a lot of attention lately is the thought that consequentialism is a theory characterized basically by its agent neutrality. The idea, however, has also met with skepticism. In particular, it has been argued that agent neutrality cannot be what separates consequentialism from other types of theories of reasons for action, since there can be agent-neutral non-consequentialist theories as well as agent-relative consequentialist theories. I will argue in this paper that this last claim is false. The paper is divided into four sections. Section one specifies two senses in which consequentialism is agent-neutral. Sections two and three examine and reject, respectively, the claims that there are agent-relative consequentialist views and that there are agent-neutral non-consequentialist views. I end the paper with some remarks on the plausibility, or better, the implausibility of characterizing consequentialism in terms other than agent neutrality.
Symposium contribution on Mark Schroeder's Slaves of the Passions. Argues that Schroeder's account of agent-neutral reasons cannot be made to work, that the limited scope of his distinctive proposal in the epistemology of reasons undermines its plausibility, and that Schroeder faces an uncomfortable tension between the initial motivation for his view and the details of the view he develops.
In this paper, I criticize David McNaughton and Piers Rawling's formalization of the agent-relative/agent-neutral distinction. I argue that their formalization is unable to accommodate an important ethical distinction between two types of conditional obligations. I then suggest a way of revising their formalization so as to fix the problem.
Agent-relative restrictions prohibit minimizing violations: that is, they require us not to minimize the total number of their violations by violating them ourselves. Frances Kamm has explained this prohibition in terms of the moral worth of persons, which, in turn, she explains in terms of persons’ high moral status as inviolable beings. I press the following criticism of this account: even if minimizing violations are permissible, we need not have a lower moral status provided other determinants thereof boost it. Thus, Kamm’s account is incomplete at best. And when, to address this incompleteness, it is insisted that our moral worth derives from specific moral statuses, the inviolability account comes to seem deficient because it begs the question against those who are not initially persuaded that minimizing violations are impermissible.
In this paper we consider the concept of a self-aware agent. In cognitive science agents are seen as embodied and interactively situated in worlds. We analyse the meanings attached to these terms in cognitive science and robotics, proposing a set of conditions for situatedness and embodiment, and examine the claim that internal representational schemas are largely unnecessary for intelligent behaviour in animats. We maintain that current situated and embodied animats cannot be ascribed even minimal self-awareness, and offer a six-point definition of embeddedness, constituting minimal conditions for the evolution of a sense of self. This leads to further analysis of the nature of embodiment and situatedness, and a consideration of whether virtual animats in virtual worlds could count as situated and embodied. We propose that self-aware agents must possess complex structures of self-directed goals, multi-modal sensory systems, and a rich repertoire of interactions with their worlds. Finally, we argue that embedded agents will possess or evolve local co-ordinate systems, or points of view, relative to their current positions in space and time, and have a capacity to develop an egocentric space. None of these capabilities is possible without powerful internal representational capacities.
The aim of this paper is to discuss the “Framework for M&S with Agents” (FMSA) proposed by Zeigler et al. [2000, 2009] in regard to the diverse epistemological aims of agent simulations in the social sciences. We first show that there are indeed great similarities, and hence that the aim to emulate a universal “automated modeler agent” opens new ways of interaction between these two domains of M&S with agents. For example, it can be shown that the multi-level conception at the core of the FMSA is similar in both contexts: the notions of “levels of system specification”, “behavior of models”, “simulator” and “endomorphic agents” can be partially translated into the terms linked to the “denotational hierarchy” (DH) recently introduced in a multi-level centered epistemology of M&S. Second, we suggest considering the question of the “credibility” of agent M&S in the social sciences when we do not try to emulate but only to simulate target systems. Whereas a stringent and standardized treatment of the heterogeneous internal relations (in the DH) between systems of formalisms is the key problem and the essential challenge in the scope of agent M&S driven engineering, it is urgent too to address the problem of the external relations (and of the external validity, hence of the epistemic power and credibility) of such levels of formalisms in the specific domains of agent M&S in the social sciences, especially when we intend to introduce the concepts of activity tracking.
Why do agent-relative reasons have authority over us, reflective creatures? Reductive accounts base the normativity of agent-relative reasons on agent-neutral considerations, such as the claim that parents caring especially for their own children best serves the interests of all children. Such accounts, however, beg the question about the source of normativity of agent-relative ways of reason-giving. In this paper, I argue for a non-reductive account of the reflective necessity of agent-relative concerns. Such an account will reveal an important structural complexity of practical reasoning in general. Christine Korsgaard relates the rational binding force of practical reasons to the various identities or self-conceptions under which we value ourselves. The problem is that it is not clear why such self-conceptions would necessitate us rationally, given the fact that most of our identities are simply given. Perhaps Harry Frankfurt is right in arguing that we are necessitated not only by reason but also, and predominantly, by what we love. I argue, however, that the necessities of love (in Frankfurt’s phrase) are not to be separated from, but should be seen as belonging to, the necessities of reason. Our loves, concerns and related identities provide a specific and important structure to practical reflection. They function in the background of reasoning, having a specific default role: they would lose their character as concerns if there were a need for them to be cited in the foreground of deliberation, or a need to justify them. This does not mean that our deep concerns cannot be scrutinised. They can only be scrutinised in an indirect way, however, which explains their role in grounding the normativity of agent-relative reasons. It appears that this account can provide a viable interpretation of Korsgaard’s argument about the foundational role of practical identities.
In this paper, we first propose a simple formal language to specify types of agents in terms of necessary conditions for their announcements. Based on this language, types of agents are treated as ‘first-class citizens’ and studied extensively in various dynamic epistemic frameworks which are suitable for reasoning about knowledge and agent types via announcements and questions. To demonstrate our approach, we discuss various versions of Smullyan’s Knights and Knaves puzzles, including the Hardest Logic Puzzle Ever (HLPE) proposed by Boolos (in Harv Rev Philos 6:62–65, 1996). In particular, we formalize HLPE and verify a classic solution to it. Moreover, we propose a spectrum of new puzzles based on HLPE by considering subjective (knowledge-based) agent types and relaxing the implicit epistemic assumptions in the original puzzle. The new puzzles are harder than the previously proposed ones in the literature, in the sense that they require deeper epistemic reasoning. Surprisingly, we also show that a version of HLPE in which the agents do not know the others’ types does not have a solution at all. Our formalism paves the way for studying these new puzzles using automatic model checking techniques.
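The paper itself works in dynamic epistemic logic with model checking; as a much looser illustration of how Knights and Knaves puzzles can be verified mechanically, the sketch below simply enumerates truth assignments for a classic two-agent puzzle. The puzzle, the agent names, and the brute-force approach are illustrative assumptions, not the authors' formalism.

```python
from itertools import product

# Classic Smullyan puzzle (illustrative): knights always tell the truth,
# knaves always lie. Agent A says: "We are both knaves."
def solve():
    solutions = []
    for a_knight, b_knight in product([True, False], repeat=2):
        statement = (not a_knight) and (not b_knight)  # "we are both knaves"
        # A's statement is true exactly when A is a knight
        if statement == a_knight:
            solutions.append((a_knight, b_knight))
    return solutions

print(solve())  # the unique solution: A is a knave, B is a knight
```

Epistemic variants like HLPE need a richer state space (agents' knowledge, not just their types), but the same enumerate-and-filter idea underlies automatic model checking of such puzzles.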
This paper addresses the problem of human–computer interactions when the computer can interpret and express a kind of human-like behavior, offering natural communication. A conceptual framework for incorporating emotions with rationality is proposed. A model of affective social interactions is described. The model utilizes the SAIBA framework, which distinguishes among several stages of processing of information. The SAIBA framework is extended, and a model is realized in human behavior detection, human behavior interpretation, intention planning, attention tracking, behavior planning, and behavior realization components. Two models of incorporating emotions with rationality into a virtual artifact are presented. The first one uses an implicit implementation of emotions. The second one has an explicit realization of a three-layered model of emotions, which is highly interconnected with other components of the system. Details of the model with implicit implementation of emotional behavior are shown, as well as the evaluation methodology and results. A discussion of the extended model of an agent is given in the final part of the paper.
Now that complex agent-based models and computer simulations have spread across economics and the social sciences, as in most sciences of complex systems, epistemological puzzles (re)emerge. We introduce new epistemological concepts so as to show to what extent authors are right when they focus on some empirical, instrumental or conceptual significance of their model or simulation. By distinguishing between models and simulations, between types of models, between types of computer simulations and between types of empiricity obtained through a simulation, section 2 makes it possible to understand more precisely, and then to justify, the diversity of the epistemological positions presented in section 1. Our final claim is that careful attention to the multiplicity of the denotational powers of symbols at stake in complex models and computer simulations is necessary to determine, in each case, their proper epistemic status and credibility.
Construction of a robot discoverer can be treated as the ultimate success of automated discovery. In order to build such an agent we must understand the algorithmic details of the discovery processes and the representation of scientific knowledge needed to support the automation. To understand the discovery process we must build automated systems. This paper investigates the anatomy of a robot discoverer, examining various components developed and refined to various degrees over two decades. We also clarify the notion of autonomy of an artificial agent, and we discuss the ways in which machine discoverers become more autonomous. Finally, we summarize the main principles useful in the construction of automated discoverers and discuss various possible limitations of automation.
The use of computer simulation for building theoretical models in social science is introduced. It is proposed that agent-based models have potential as a “third way” of carrying out social science, in addition to argumentation and formalisation. With computer simulations, in contrast to other methods, it is possible to formalise complex theories about processes, carry out experiments and observe the occurrence of emergence. Some suggestions are offered about techniques for building agent-based models and for debugging them. A scheme for structuring a simulation program into agents, the environment and other parts for modifying and observing the agents is described. The article concludes with some references to modelling tools helpful for building computer simulations.
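The scheme described above, agents plus an environment plus observation code, can be conveyed with a deliberately tiny sketch. The model below (binary opinions on a ring, each agent copying its local majority) is an invented toy, not one from the article; it only illustrates how simple local rules, run in a loop, produce observable emergent clustering.

```python
import random

# Toy agent-based model (illustrative, not from the article): 20 agents on a
# ring hold opinion 0 or 1 and adopt the local majority of {left, self, right}.
def step(opinions):
    n = len(opinions)
    new = []
    for i in range(n):
        neighborhood = [opinions[(i - 1) % n], opinions[i], opinions[(i + 1) % n]]
        new.append(1 if sum(neighborhood) >= 2 else 0)
    return new

random.seed(0)  # fixed seed so the experiment is repeatable
opinions = [random.randint(0, 1) for _ in range(20)]
for _ in range(10):          # run the simulation
    opinions = step(opinions)
print(opinions)              # emergent clusters of agreeing agents
```

The separation between the agents' rule (`step`), the environment (the ring), and the observer (the final `print`) mirrors, in miniature, the program structure the article recommends.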
In this paper we address the problem of defining social roles in multi-agent systems. Social roles provide the basic structure of social institutions and organizations. We start from the properties attributed to roles both in the multi-agent systems and the Object Oriented community, and we use them in an ontological analysis of the notion of social role. We identify three main properties of social roles. First, they are definitionally dependent on the institution they belong to, i.e. the definition of a role is given inside the definition of the institution. Second, they attribute powers to the agents playing them, like creating commitments for the institutions and the other roles. Third, they allow roles to play roles, in the same way as agents do. Using Input/Output logics, we propose a formalization of roles in multi-agent systems satisfying the three properties we identified.
Compositional verification aims at managing the complexity of the verification process by exploiting compositionality of the system architecture. In this paper we explore the use of a temporal epistemic logic to formalize the process of verification of compositional multi-agent systems. The specification of a system, its properties and their proofs are of a compositional nature, and are formalized within a compositional temporal logic: Temporal Multi-Epistemic Logic. It is shown that compositional proofs are valid under certain conditions. Moreover, the possibility of incorporating default persistence of information in a system is explored. A completion operation on a specific type of temporal theories, temporal completion, is introduced to be able to use classical proof techniques in verification with respect to non-classical semantics covering default persistence.
In this paper we present a sequent calculus for the multi-agent system S5_m. First, we introduce a particularly simple alternative Kripke semantics for the system S5_m. Then, we construct a hypersequent calculus for S5_m that reflects at the syntactic level this alternative interpretation. We prove that this hypersequent calculus is theoremwise equivalent to the Hilbert-style system S5_m, that it is contraction-free and cut-free, and finally that it is decidable. All results are proved in a purely syntactic way, and the cut-elimination procedure yields an upper bound of ip_2(n, 0), where ip_2 is a hyperexponential function of base 2.
Two areas of importance for agents and multi-agent systems are investigated: the design of agent programming languages, and the design of agent communication languages. The paper contributes to these areas by demonstrating improved or novel applications of deontic logic and normative reasoning. Examples are taken from computer-supported cooperative work and electronic commerce.
The objective of this work is to demonstrate how cooperative sharers and uncooperative free riders can be placed in different groups of an electronic society in a decentralised manner. We have simulated an agent-based, open and decentralised P2P system which self-organises into different groups to avoid cooperative sharers being exploited by uncooperative free riders. This approach encourages sharers to move to better groups and restricts free riders' entry into groups of sharers, without needing centralised control. Our approach is suitable for current P2P systems that are open and distributed. Gossip is used as a social mechanism for information sharing which facilitates the formation of groups. Using multi-agent based simulations we demonstrate how the adaptive behaviour of agents leads to self-organisation. We have tested varying levels of gossip and checked their impact on the system's behaviour. We have also investigated the impact of false gossip in this system, where gossip is the medium for information sharing that leads to self-organisation.
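The core mechanism, gossip about observed behaviour driving group membership, can be sketched in a few lines. Everything below (the `Agent` class, the sharing probabilities, the reputation-as-observed-frequency rule, the regrouping step) is an invented simplification for illustration, not the paper's protocol; in particular it omits false gossip and agent adaptation.

```python
import random

# Toy sketch of gossip-driven group sorting (assumptions, not the paper's model).
random.seed(1)

class Agent:
    def __init__(self, share_prob):
        self.share_prob = share_prob  # sharers ~0.9, free riders ~0.1
        self.reputation = 0.5         # prior belief before any gossip

    def act(self):
        return random.random() < self.share_prob  # True = shares a file

def gossip_round(agents, observations=20):
    # peers gossip what they observed; reputation = observed sharing frequency
    for a in agents:
        shares = sum(a.act() for _ in range(observations))
        a.reputation = shares / observations

def regroup(agents, group_size=5):
    # decentralised in spirit: rank by gossiped reputation, then partition
    ranked = sorted(agents, key=lambda a: a.reputation, reverse=True)
    return [ranked[i:i + group_size] for i in range(0, len(ranked), group_size)]

agents = [Agent(0.9) for _ in range(5)] + [Agent(0.1) for _ in range(5)]
random.shuffle(agents)
gossip_round(agents)
groups = regroup(agents)
# with enough gossiped observations, sharers cluster in the top group
```

The global sort is a shortcut; in an actual P2P setting each group would admit or expel members using only locally gossiped reputations.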
Infonorma is a multi-agent system that provides its users with recommendations of legal normative instruments they might be interested in. The Filter agent of Infonorma classifies normative instruments represented as Semantic Web documents into legal branches and performs content-based similarity analysis. This agent, as well as the entire Infonorma system, was modeled under the guidelines of MAAEM, a software development methodology for multi-agent application engineering. This article describes the Infonorma requirements specification, the architectural design solution for those requirements, the detailed design of the Filter agent and the implementation model of Infonorma, according to the guidelines of the MAAEM methodology.
The present paper stems from the biosemiotic modelling of individual artificial cognition proposed by Ferreira and Caldas (2012) but goes further by introducing the concept of Umwelt Overlap. The introduction of this concept is of fundamental importance, making the present model closer to natural cognition. In fact, cognition can only be viewed as a purely individual phenomenon for analytical purposes. In nature it always involves the crisscrossing of the spheres of action of those sharing the same environmental bubble. Moreover, the incorporation of that concept is vital to understanding the complex semiosis that sustains collective tissues and societies, regulating collective cognition and consequently cooperative action. The concept of Umwelt Overlap broadens the range of applicability of the previous model to several distinct domains, allowing, for example, for its application to multi-agent cooperative autonomous systems. In this paper a Middle Size League RoboCup soccer team is used as an example of a possible application.
We argue that the notion of trust, as it figures in an ethical context, can be illuminated by examining research in artificial intelligence on multi-agent systems in which commitment and trust are modeled. We begin with an analysis of a philosophical model of trust based on Richard Holton’s interpretation of P. F. Strawson’s writings on freedom and resentment, and we show why this account of trust is difficult to extend to artificial agents (AAs) as well as to other non-human entities. We then examine Margaret Urban Walker’s notions of “default trust” and “default, diffuse trust” to see how these concepts can inform our analysis of trust in the context of AAs. In the final section, we show how ethicists can improve their understanding of important features in the trust relationship by examining data resulting from a classic experiment involving AAs.
A formal model for updates—the result of learning that the world has changed—in a multi-agent setting is presented and completely axiomatized. The model allows that several agents simultaneously are informed of an event in the world in such a way that it becomes common knowledge among the agents that the event has occurred. The model shares many features with the model for common announcements—an announcement about the state of the world in which it becomes common knowledge among the audience that the announcement has been made—presented in Cantwell (2005), but exploits the difference between learning that a state of the world obtains and learning that the state of the world has changed.
We use the example of the introduction of the anti-smoking legislation to model the relationship between the cultural make-up, in terms of values, of societies and the acceptance of and compliance with norms. We present two agent-based simulations and discuss the challenge of modeling sanctions and their relation to values and culture.
A socio-cognitive approach to trust can help us envisage a notion of networked trust for multi-agent systems (MAS) based on different interacting agents. In this framework, the issue is to evaluate whether or not a socio-cognitive analysis of trust can apply to the interactions between human and autonomous agents. Two main arguments support two alternative hypotheses. One suggests that only reliance applies to artificial agents, because the predictability of agents’ digital interaction is viewed as an absolute value and human relation is judged to be a necessary requirement for trust. The other suggests that trust may apply to autonomous agents, because the predictability of agents’ interaction is viewed only as a relative value, since the digital normativity that grows out of the communication process between interacting agents in MAS has always dealt with some unpredictable outcomes (reduction of uncertainty). Furthermore, human touch is not judged to be a necessary requirement for trust. In this perspective, a diverse notion of trust is elaborated, as trust is no longer conceived only as a relation between interacting agents but, rather, as a relation between cognitive states of control and lack of control (double bind).
A thematic priority of the European Union’s Framework V research and development programme was the creation of a user-friendly information society which met the needs of citizens and enterprises. In practice, though, for example in the case of on-line digital music, the needs of citizens and enterprises may be in conflict. This paper proposes to leverage the appearance of “intelligence” in the platform layer of a layered communications architecture to avoid such conflicts in similar applications in the future. The key idea is that if the intelligence is encapsulated in an agent, then the agents should be organized as a society, and then the rules of the society can be used to ensure “responsible” behaviour. We discuss how an agent society can be used to regulate behaviour in future information trading scenarios, and conclude that this approach offers a “third way” which can satisfy the (reasonable) needs of both citizens and enterprises in the user-friendly information society.
In this article, we show that behavioral features can be obtained at a group level even if they do not appear at the individual level. Starting from a standard model of Pareto optimal allocations, with expected utility maximizers but allowing for heterogeneity among individual beliefs, we show in particular that the representative agent has an inverse S-shaped probability distortion function, as in cumulative prospect theory.
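For readers unfamiliar with the shape in question, a standard example of an inverse S-shaped probability distortion is the Tversky and Kahneman (1992) weighting function from cumulative prospect theory. The sketch below is illustrative only; it is the textbook function, not the distortion function derived for the representative agent in this article.

```python
# Tversky and Kahneman (1992) probability weighting function:
#   w(p) = p^gamma / (p^gamma + (1 - p)^gamma)^(1/gamma)
# With gamma < 1 it is inverse S-shaped: small probabilities are
# overweighted and large probabilities are underweighted.
def weight(p, gamma=0.61):  # 0.61 is the value estimated in their paper
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

print(weight(0.01) > 0.01)  # True: small p overweighted
print(weight(0.99) < 0.99)  # True: large p underweighted
```

The crossover point where w(p) = p lies at an intermediate probability, which is exactly what "inverse S-shaped" means.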
I propose an Aristotelian approach to agent causation that is consistent with the hypothesis of strong emergence. This approach motivates a wider ontology than materialism by maintaining (1) that the agent is generated by the brain without being reducible to it on grounds of the unity of experience and (2) that the agent possesses (formal) causal power to affect (i.e., mold, sculpt, or organize) the brain on grounds of agent-directed neuroplasticity. After providing recent empirical evidence for the strong emergence of the agent, I then articulate and analyze a dominant objection to agent causation discussed in neuroscience, which is based upon the observation of the readiness potential (or RP) in the brain. In this context, the RP refers to unconscious neuronal events (in the supplementary motor area) that precede the formation of a (proximal) conscious intention to act. So it appears as if the train of neuronal events has left the depot before the agent can act. In response to this objection, I argue (a) that even if one were to grant that the RP precedes the formation of a conscious intention, it would not follow (on both logical and empirical grounds) that there is no conscious agent causation; and (b) that the objection disappears when one takes into account distal versus proximal intentions.
We present a generic denotational semantic framework for protocols for dialogs between rational and autonomous agents over action which allows for retraction and revocation of proposals for action. The semantic framework views participants in a deliberation dialog as jointly and incrementally manipulating the contents of shared spaces of action-intention tokens. The framework extends prior work by decoupling the identity of an agent who first articulates a proposal for action from the identity of any agent then empowered to retract or revoke the proposal, thereby permitting proposals, entreaties, commands, promises, etc., to be distinguished semantically.
In normative multi-agent systems, the question of “how an agent identifies norms in an open agent society” has not received much attention. This paper aims at addressing this question. To this end, it proposes an architecture for norm identification for an agent, based on observation of interactions between agents. This architecture enables an autonomous agent to identify prohibition norms in a society using the prohibition norm identification (PNI) algorithm. The PNI algorithm uses association rule mining, a data mining approach, to identify sequences of events as candidate norms. When a norm changes, an agent using our architecture will be able to modify the norm, and also to remove a norm if it no longer holds in the society. Using simulations of a park scenario we demonstrate how an agent makes use of the norm identification framework to identify prohibition norms.
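The intuition behind mining prohibition norms from observations can be conveyed with a deliberately simplified sketch: an action that is frequently followed by a sanction becomes a candidate prohibited action. The function, the event names, and the confidence threshold below are all illustrative assumptions; this is a single association-rule-style heuristic, not the paper's PNI algorithm.

```python
from collections import Counter

# Toy norm-candidate miner (illustrative, not the PNI algorithm): an event is a
# candidate prohibition if the fraction of its occurrences later followed by a
# "sanction" event meets a confidence threshold.
def candidate_prohibitions(episodes, min_confidence=0.8):
    followed_by_sanction = Counter()
    occurrences = Counter()
    for episode in episodes:
        for i, event in enumerate(episode):
            if event == "sanction":
                continue
            occurrences[event] += 1
            if "sanction" in episode[i + 1:]:
                followed_by_sanction[event] += 1
    return {e for e in occurrences
            if followed_by_sanction[e] / occurrences[e] >= min_confidence}

# Hypothetical observations from a park-like scenario:
episodes = [
    ["walk", "litter", "sanction"],
    ["walk", "eat", "litter", "sanction"],
    ["walk", "eat"],
    ["litter", "sanction"],
]
print(candidate_prohibitions(episodes))  # {'litter'}
```

Re-running the miner as new episodes arrive gives a crude analogue of norm revision: a candidate that stops meeting the threshold is dropped, mirroring the architecture's ability to remove norms that no longer hold.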
Following up on Thomas Nagel’s paper “What is it like to be a bat?” and Alan Turing’s essay “Computing machinery and intelligence,” it shall be claimed that a successful interaction of human beings and autonomous artificial agents depends more on which characteristics human beings ascribe to the agent than on whether the agent really has those characteristics. It will be argued that Masahiro Mori’s concept of the “uncanny valley” as well as evidence from several empirical studies supports that assertion. Finally, some tentative conclusions concerning moral implications of the arguments presented here shall be drawn.
Formal dialogue games have been studied in philosophy since at least the time of Aristotle. Recently they have been applied in various contexts in computer science and artificial intelligence, particularly as the basis for interaction between autonomous software agents. We review these applications and discuss the many open research questions and challenges at this exciting interface between philosophy and computer science.