Modelling Trust in Artificial Agents, A First Step Toward the Analysis of e-Trust

Abstract

This paper provides a new analysis of e-trust, that is, trust occurring in digital contexts, among the artificial agents of a distributed artificial system. The analysis endorses a non-psychological approach and rests on a Kantian regulative ideal of a rational agent, one able to choose the best option for itself given a specific scenario and a goal to achieve. The paper first introduces e-trust and describes its relevance for contemporary society, and then presents a new theoretical analysis of this phenomenon. The analysis focuses first on an agent’s trustworthiness, which is presented as the necessary requirement for e-trust to occur. A new definition of e-trust as a second-order property of first-order relations is then presented, and it is shown that this second-order property has the effect of minimising an agent’s effort and commitment in the achievement of a given goal. On this basis, a method is provided for the objective assessment of the levels of e-trust occurring among the artificial agents of a distributed artificial system.

Notes

  1. http://www.lginternetfamily.co.uk/fridge.asp.

  2. http://www.nytimes.com/2004/09/21/national/21cameras.html.

  3. http://blog.wired.com/defense/2007/08/httpwwwnational.html.

  4. http://blog.wired.com/defense/2007/06/for_years_and_y.html.

  5. http://blog.wired.com/defense/2007/06/for_years_and_y.html.

  6. http://www.airforce-technology.com/projects/predator/.

  7. AAs are computational systems situated in a specific environment and able to adapt themselves to changes in it. They are also able to interact with the environment and with other agents, both human and artificial, and to act autonomously to achieve their goals. AAs are not endowed with mental states, feelings or emotions. For a more in-depth analysis of the features of AAs, see Floridi and Sanders (2004).

  8. These AAs are assumed to comply with the axioms of rational choice theory. The axioms are: (1) completeness: for any pair of alternatives (x and y), the AA either prefers x to y, prefers y to x, or is indifferent between x and y. (2) Transitivity: if an AA prefers x to y and y to z, then it necessarily prefers x to z. If it is indifferent between x and y, and indifferent between y and z, then it is necessarily indifferent between x and z. (3) Priority: the AA will choose the most preferred alternative. If the AA is indifferent between two or more alternatives that are preferred to all others, it will choose one of those alternatives, with the specific choice among them remaining indeterminate. (A minimal illustrative sketch of these axioms is given after these notes.)

  9. http://en.wikipedia.org/wiki/WOT:_Web_of_Trust.

  10. These systems are widely diffused; there is a plethora of MAS able to perform tasks such as product brokering, merchant brokering and negotiation. Such systems are also able to address problems like security, trust, reputation, law, payment mechanisms, and advertising (Guttman et al. 1998; Nwana et al. 1998).

  11. The reader may consider this process similar to the one that occurs in e-commerce contexts involving HAs, such as eBay.

  12. Note that “action” indicates here any performance of an AA, from, for example, controlling an unmanned vehicle to communicating information or data to another AA. For the role of trust in informative processes see (reference removed for double blind review).

  13. As the reader might already know, a minimax rule is a decision rule used in decision theory and game theory. The rule is used to minimise the maximum possible loss or, equivalently, to maximise the minimum gain. (A minimal sketch of such a rule is given after these notes.)

  14. In the theory of levels of abstraction (LoA), discrete mathematics is used to specify and analyse the behaviour of information systems. A LoA is defined as follows: given a well-defined set X of values, an observable of type X is a variable whose value ranges over X; a LoA consists of a collection of such observables, each with a well-defined set of possible values or outcomes. The LoA is determined by the way in which one chooses to describe, analyse and discuss a system and its context. Each LoA makes possible an analysis of the system, the result of which is called a model of the system; a system may therefore be described at a range of LoAs and so can have a range of models. More intuitively, a LoA is comparable to an ‘interface’ consisting of a set of features, the observables. (A minimal sketch of a LoA along these lines is also given after these notes.)
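The following is a minimal, illustrative Python sketch of the rational-choice axioms listed in note 8. It assumes, purely for illustration, that an AA’s preferences can be represented by a numeric utility function (any complete and transitive ordering over a finite set of alternatives admits such a representation); the alternative names and utilities below are hypothetical.

    from typing import Callable, Iterable, List

    class RationalAA:
        """An AA that ranks alternatives by a (hypothetical) utility function."""

        def __init__(self, utility: Callable[[str], float]):
            self.utility = utility

        def prefers(self, x: str, y: str) -> bool:
            # Completeness: for any pair (x, y), exactly one of prefers(x, y),
            # prefers(y, x) or indifferent(x, y) holds.
            # Transitivity holds by construction, since preferences are read
            # off a single numeric scale.
            return self.utility(x) > self.utility(y)

        def indifferent(self, x: str, y: str) -> bool:
            return self.utility(x) == self.utility(y)

        def choose(self, alternatives: Iterable[str]) -> List[str]:
            # Priority: pick the most preferred alternative(s); if several are
            # tied, the choice among them remains indeterminate, so all ties
            # are returned.
            options = list(alternatives)
            best = max(self.utility(a) for a in options)
            return [a for a in options if self.utility(a) == best]

    # Usage: three hypothetical courses of action.
    aa = RationalAA(utility=lambda a: {"x": 3.0, "y": 2.0, "z": 2.0}[a])
    assert aa.prefers("x", "y")        # x preferred to y
    assert aa.indifferent("y", "z")    # indifference between y and z
    print(aa.choose(["x", "y", "z"]))  # ['x']
    print(aa.choose(["y", "z"]))       # ['y', 'z'] -> indeterminate tie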
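Note 13’s minimax rule can be illustrated with the following minimal sketch. The loss table is entirely hypothetical (the actions and environment states are invented for illustration); the rule simply selects the action whose worst-case loss is smallest.

    # Hypothetical losses for each action under three possible environment states.
    LOSSES = {
        "delegate_task":  [1.0, 4.0, 2.0],
        "perform_itself": [3.0, 3.0, 3.0],
        "do_nothing":     [0.0, 9.0, 5.0],
    }

    def minimax_choice(losses):
        """Return the action that minimises the maximum possible loss."""
        return min(losses, key=lambda action: max(losses[action]))

    print(minimax_choice(LOSSES))  # 'perform_itself': its worst case (3.0) is the smallest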
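Finally, the notion of a level of abstraction in note 14 can be rendered as a small data structure: a LoA is a collection of typed observables, and observing a system through a LoA yields a model of that system. The observables and the example “system” below are hypothetical, chosen only to show that different LoAs produce different models of the same system.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Observable:
        name: str
        values: frozenset  # the well-defined set X of values the observable ranges over

    class LevelOfAbstraction:
        """An 'interface' onto a system: the set of features one chooses to observe."""

        def __init__(self, *observables: Observable):
            self.observables = observables

        def model(self, system: dict) -> dict:
            """Analyse the system at this LoA; the result is a model of the system."""
            result = {}
            for obs in self.observables:
                value = system.get(obs.name)
                if value not in obs.values:
                    raise ValueError(f"{value!r} is not an admissible value of {obs.name}")
                result[obs.name] = value
            return result

    # The same system described at two different LoAs yields two different models.
    system = {"latency_ms": 20, "encrypted": True, "vendor": "acme"}
    behavioural = LevelOfAbstraction(
        Observable("latency_ms", frozenset(range(1000))),
        Observable("encrypted", frozenset({True, False})),
    )
    commercial = LevelOfAbstraction(Observable("vendor", frozenset({"acme", "other"})))
    print(behavioural.model(system))  # {'latency_ms': 20, 'encrypted': True}
    print(commercial.model(system))   # {'vendor': 'acme'}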

References

  • Castelfranchi, C., & Falcone, R. (1998). Principles of trust for MAS: Cognitive anatomy, social importance, and quantification. In Third International Conference on Multi-Agent Systems (ICMAS’98). Paris: IEEE Computer Society.

  • Corritore, C. L., Kracher, B., & Wiedenbeck, S. (2003). On-line trust: Concepts, evolving themes, a model. International Journal of Human-Computer Studies, 58(6), 737–758.

  • de Vries, P. (2006). Social presence as a conduit to the social dimensions of online trust. In W. IJsselsteijn, Y. de Kort, C. Midden, B. Eggen, & E. van den Hoven (Eds.), Persuasive technology (pp. 55–59). Berlin: Springer.

  • Floridi, L. (2008). The method of levels of abstraction. Minds and Machines, 18(3), 303–329.

  • Floridi, L., & Sanders, J. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349–379.

  • Gambetta, D. (1998). Can we trust trust? In D. Gambetta (Ed.), Trust: Making and breaking cooperative relations (pp. 213–238). Oxford: Blackwell.

  • Guttman, R., Moukas, A., & Maes, P. (1998). Agent-mediated electronic commerce: A survey. Knowledge Engineering Review, 13(2), 147–159.

  • Lagenspetz, O. (1992). Legitimacy and trust. Philosophical Investigations, 15(1), 1–21.

  • Luhmann, N. (1979). Trust and power. Chichester: Wiley.

  • Nissenbaum, H. (2001). Securing trust online: Wisdom or oxymoron. Boston University Law Review, 81(3), 635–664.

  • Nwana, H., Rosenschein, J., et al. (1998). Agent-mediated electronic commerce: Issues, challenges and some viewpoints. Autonomous Agents 98, ACM Press.

  • Papadopoulou, P. (2007). Applying virtual reality for trust-building e-commerce environments. Virtual Reality, 11(2–3), 107–127.

  • Seamons, K. E., Winslett, M., Yu, T., Lu, L., & Jarvis, R. (2003). Protecting privacy during on-line trust negotiation. In R. Dingledine, P. Syverson, et al. (Eds.), Privacy enhancing technologies (pp. 249–253). Berlin: Springer.

  • Taddeo, M. (2009). Defining trust and e-trust: Old theories and new problems. International Journal of Technology and Human Interaction (IJTHI), 5(2), 23–35.

  • Tuomela, M., & Hofmann, S. (2003). Simulating rational social normative trust, predictive trust, and predictive reliance between agents. Ethics and Information Technology, 5(3), 163–176.

  • Weckert, J. (2005). Trust in cyberspace. In R. J. Cavalier (Ed.), The impact of the internet on our moral lives (pp. 95–120). Albany: State University of New York Press.

  • Wooldridge, M. (2002). An introduction to multiagent systems. Chichester: Wiley.

Acknowledgments

I am very grateful to Terrell W. Bynum, Charles M. Ess, Luciano Floridi, and Matteo Turilli for their helpful suggestions and conversations on the previous drafts on which this article is based. They are responsible only for the improvements, not for any remaining mistakes.

Author information

Corresponding author

Correspondence to Mariarosaria Taddeo.

Cite this article

Taddeo, M. Modelling Trust in Artificial Agents, A First Step Toward the Analysis of e-Trust. Minds & Machines 20, 243–257 (2010). https://doi.org/10.1007/s11023-010-9201-3
