References
This paper provides a new analysis of e-trust, trust occurring in digital contexts, among the artificial agents of a distributed artificial system. The analysis endorses a non-psychological approach and rests on a Kantian regulative ideal of a rational agent, able to choose the best option for itself, given a specific scenario and a goal to achieve. The paper first introduces e-trust, describing its relevance for contemporary society, and then presents a new theoretical analysis of this phenomenon. (...)
Ethics is ordinarily understood as being concerned with questions of responsibility for and in the face of an other. This other is more often than not conceived of as another human being and, as such, necessarily excludes others – most notably animals and machines. This essay examines the ethics of such exclusivity. It is divided into three parts. The first part investigates the exclusive anthropocentrism of traditional forms of moral thinking and, following the example of recent innovations in animal rights philosophy, (...)
In their important paper “Autonomous Agents”, Floridi and Sanders use “levels of abstraction” to argue that computers are or may soon be moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. In their paper, Floridi and Sanders contributed definitions of autonomy, moral accountability and responsibility, but they did not deeply explore some essential questions that need to be answered by computer scientists who design artificial agents. One such question (...)
There is a growing literature on the concept of e-trust and on the feasibility and advisability of “trusting” artificial agents. In this paper we present an object-oriented model for thinking about trust in both face-to-face and digitally mediated environments. We review important recent contributions to this literature regarding e-trust in conjunction with presenting our model. We identify three important types of trust interactions and examine trust from the perspective of a software developer. Too often, the primary focus of research in (...)
Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most (...)
The paper provides a selective analysis of the main theories of trust and e-trust (that is, trust in digital environments) developed over the last twenty years, with the goal of preparing the ground for a new philosophical approach to solve the problems facing them. It is divided into two parts. The first part lays the groundwork for the analysis of e-trust: it focuses on trust, its definition and foundation, and describes the general background on which the analysis of e-trust rests. (...)