Ethics and Information Technology, Volume 13, Issue 1, pp. 39–51

Trust and multi-agent systems: applying the “diffuse, default model” of trust to experiments involving artificial agents



Abstract

We argue that the notion of trust, as it figures in an ethical context, can be illuminated by examining research in artificial intelligence on multi-agent systems in which commitment and trust are modeled. We begin with an analysis of a philosophical model of trust based on Richard Holton’s interpretation of P. F. Strawson’s writings on freedom and resentment, and we show why this account of trust is difficult to extend to artificial agents (AAs) as well as to other non-human entities. We then examine Margaret Urban Walker’s notions of “default trust” and “default, diffuse trust” to see how these concepts can inform our analysis of trust in the context of AAs. In the final section, we show how ethicists can improve their understanding of important features in the trust relationship by examining data resulting from a classic experiment involving AAs.


Keywords: Artificial agents · Default trust · Diffuse trust · Multi-agent systems · Trust



Abbreviations

AA: Artificial agent
DBSCP: Discount-based social-commitment policy
RBSCP: Ranking-based social-commitment policy
SPIRE: SharedPlans intention reconciliation experiments
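
The two social-commitment policies listed above come from the SPIRE intention-reconciliation experiments reported by Grosz et al. (reference 8 below), in which team agents must decide whether to honor an existing group commitment or default on it when a more valuable outside task arrives. As a rough, hypothetical illustration of the discount-based idea only, and not the SPIRE implementation or the authors' own model, the Python sketch below discounts the value of an outside offer before comparing it against a standing commitment; the task names, utility values, and discount parameter are all invented for the example.

    # Toy sketch of a discount-based social-commitment policy.
    # NOT the SPIRE implementation from Grosz et al. (2002); a hypothetical
    # illustration of the general idea: before abandoning a group commitment
    # for a new offer, the agent discounts the offer's value, so defaulting
    # requires a clearly better alternative.

    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        value: float  # utility the agent expects from completing the task

    def discount_policy(committed: Task, offer: Task, discount: float) -> Task:
        """Return the task the agent chooses to pursue.

        `discount` lies in [0, 1]; lower values model a more socially
        committed agent, since the outside offer is devalued more heavily.
        """
        if offer.value * discount > committed.value:
            return offer      # default on the group commitment
        return committed      # honor the group commitment

    # Hypothetical example: with discount = 0.5, an outside task worth
    # nearly double the commitment is still declined (18.0 * 0.5 = 9.0 < 10.0).
    group_task = Task("joint delivery", value=10.0)
    outside_task = Task("solo contract", value=18.0)
    print(discount_policy(group_task, outside_task, discount=0.5).name)  # joint delivery

On this toy reading, lowering the discount factor makes an agent less willing to default on its group, which is the kind of parameter an experimenter could vary to study how commitment policies affect the trust other agents place in a teammate.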



Acknowledgments

We are grateful to Frances Grodzinsky, as well as to two anonymous reviewers, for some helpful suggestions on an earlier draft of this essay.


References

  1. American Heritage College Dictionary (4th ed.). (2002). Boston, MA: Houghton Mifflin Company.
  2. Baier, A. (1986). Trust and antitrust. Ethics, 96(2), 231–260.
  3. Camp, L. J. (2000). Trust and risk in internet commerce. Cambridge, MA: MIT Press.
  4. Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349–379.
  5. Fried, C. (1990). Privacy: A rational context. In M. D. Ermann, M. B. Williams, & C. Gutierrez (Eds.), Computers, ethics, and society (pp. 51–63). New York: Oxford University Press.
  6. Gambetta, D. (1998). Can we trust trust? In D. Gambetta (Ed.), Trust: Making and breaking cooperative relations (pp. 213–238). New York: Blackwell.
  7. Grodzinsky, F. S., Miller, K. W., & Wolf, M. J. (2009). Developing artificial agents worthy of trust: Would you buy a used car from this artificial agent? In M. Bottis (Ed.), Proceedings of the eighth international conference on computer ethics: Philosophical enquiry (CEPE 2009) (pp. 288–302). Athens, Greece: Nomiki Bibliothiki.
  8. Grosz, B., Kraus, S., Sullivan, D. G., & Das, S. (2002). The influence of social norms and social consciousness on intention reconciliation. Artificial Intelligence, 142, 147–177.
  9. Himma, K. E. (2009). Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent? Ethics and Information Technology, 11(1), 19–29.
  10. Holton, R. (1994). Deciding to trust, coming to believe. Australasian Journal of Philosophy, 72, 63–76.
  11. Johnson, D. G. (2006). Computer systems: Moral entities but not moral agents. Ethics and Information Technology, 8(4), 195–204.
  12. Lim, H. C., Stocker, R., & Larkin, H. (2008). Review of trust and machine ethics research: Towards a bio-inspired computational model of ethical trust (CMET). In Proceedings of the 3rd international conference on bio-inspired models of network, information, and computing systems, Hyogo, Japan, November 25–27, Article No. 8.
  13. Luhmann, N. (1979). Trust and power. Chichester, UK: John Wiley and Sons.
  14. Moor, J. H. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4), 18–21.
  15. Murdoch, I. (1973). The black prince. London: Chatto and Windus.
  16. Nissenbaum, H. (2001). Securing trust online: Wisdom or oxymoron? Boston University Law Review, 81(3), 635–664.
  17. O’Neill, O. (2002). Autonomy and trust in bioethics. Cambridge: Cambridge University Press.
  18. Quine, W. V. O. (1953). Two dogmas of empiricism. In W. V. O. Quine (Ed.), From a logical point of view (pp. 20–46). Cambridge, MA: Harvard University Press.
  19. Simon, J. (2009). MyChoice & traffic lights of trustworthiness: Where epistemology meets ethics in developing tools for empowerment and reflexivity. In M. Bottis (Ed.), Proceedings of the eighth international conference on computer ethics: Philosophical enquiry (CEPE 2009) (pp. 655–670). Athens, Greece: Nomiki Bibliothiki.
  20. Stahl, B. C. (2006). Responsible computers? A case for ascribing quasi-responsibility to computers independent of personhood and agency. Ethics and Information Technology, 8(4), 205–213.
  21. Strawson, P. F. (1974). Freedom and resentment. In P. F. Strawson (Ed.), Freedom and resentment and other essays (pp. 1–28). New York: Routledge.
  22. Subrahmanian, V. S., Bonatti, P., Dix, J., Eiter, T., Kraus, S., Ozcan, F., et al. (2000). Heterogeneous agent systems: Theory and implementation. Cambridge, MA: MIT Press.
  23. Taddeo, M. (2008). Modeling trust in artificial agents, a first step toward the analysis of e-trust. In Proceedings of the sixth European conference of computing and philosophy, University for Science and Technology, Montpellier, France. Reprinted in C. Ess & M. Thorseth (Eds.), Trust and virtual worlds: Contemporary perspectives. Bern: Peter Lang (in press).
  24. Taddeo, M. (2009). Defining trust and e-trust: From old theories to new problems. International Journal of Technology and Human Interaction, 5(2), 23–35.
  25. Walker, M. U. (2006). Moral repair: Reconstructing moral relations after wrongdoing. Cambridge: Cambridge University Press.
  26. Weckert, J. (2005). Trust in cyberspace. In R. Cavalier (Ed.), The impact of the internet on our moral lives (pp. 95–120). Albany, NY: State University of New York Press.

Copyright information

© Springer Science+Business Media B.V. 2010

Authors and Affiliations

  1. Department of Philosophy, Rutgers University, Newark, USA
  2. Saul Kripke Center, City University of New York-The Graduate Center, New York, USA
  3. Department of Philosophy, Rivier College, Nashua, USA
