
Trusting Norms: A Conceptual Norms’ Trust Framework for Norms Adoption in Open Normative Multi-agent Systems

  • Nurzeatul Hamimah Abdul Hamid
  • Mohd Sharifuddin Ahmad
  • Azhana Ahmad
  • Aida Mustapha
  • Moamin A. Mahmoud
  • Mohd Zaliman Mohd Yusoff
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 373)

Abstract

Norms regulate software agents' coordination and behavior in multi-agent communities. Adopted norms affect an agent's goals, plans, and actions. Although norms generate new goals that do not originate from an agent's original objectives, they provide an orientation on how the agent should act in a society. Depending on the situation, an agent needs a mechanism to correctly detect and adopt the norms of a new society. Researchers have proposed mechanisms that enable agents to detect norms; however, their work entails agents that detect only one norm in an event. We argue that these approaches do not help an agent's decision making when more than one set of norms is detected in an event. To solve this problem, we introduce the concept of norms' trust to help agents decide which detected norms are credible in a new environment. We propose a conceptual norms' trust framework that infers trust from the filtering factors of adoption ratio, authority, reputation, norm salience, and adoption risk to establish a trust value for each norm detected in an event. An agent then uses this value to decide whether to emulate, adopt, or ignore the detected norms.
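To make the idea concrete, the following is a minimal Python sketch of how the five filtering factors could be aggregated into a single trust value and mapped to the three outcomes (emulate, adopt, ignore). The abstract does not specify the actual aggregation function, weights, or thresholds used in the framework, so the weighted-sum scheme, the factor ranges, and the cut-off values below are illustrative assumptions only, not the authors' method.

```python
# Illustrative sketch (not the authors' implementation): a hypothetical
# weighted aggregation of the five filtering factors named in the abstract,
# followed by a threshold-based decision to emulate, adopt, or ignore a
# detected norm. All weights and thresholds are assumed values.
from dataclasses import dataclass


@dataclass
class DetectedNorm:
    name: str
    adoption_ratio: float   # fraction of observed agents complying, in [0, 1]
    authority: float        # endorsement by authoritative agents, in [0, 1]
    reputation: float       # reputation of the norm's adopters, in [0, 1]
    salience: float         # how prominent/active the norm is, in [0, 1]
    adoption_risk: float    # estimated cost or risk of adopting, in [0, 1]


# Hypothetical weights for combining the factors; adoption risk counts negatively.
WEIGHTS = {"adoption_ratio": 0.25, "authority": 0.25,
           "reputation": 0.2, "salience": 0.2, "adoption_risk": 0.1}


def norm_trust(n: DetectedNorm) -> float:
    """Combine the filtering factors into a single trust value in [0, 1]."""
    positive = (WEIGHTS["adoption_ratio"] * n.adoption_ratio
                + WEIGHTS["authority"] * n.authority
                + WEIGHTS["reputation"] * n.reputation
                + WEIGHTS["salience"] * n.salience)
    penalty = WEIGHTS["adoption_risk"] * n.adoption_risk
    return max(0.0, positive - penalty)


def decide(n: DetectedNorm, adopt_at: float = 0.6, emulate_at: float = 0.4) -> str:
    """Map the trust value to one of the abstract's three outcomes."""
    t = norm_trust(n)
    if t >= adopt_at:
        return "adopt"
    if t >= emulate_at:
        return "emulate"
    return "ignore"


if __name__ == "__main__":
    # Example: a widely followed, well-endorsed norm observed in a new society.
    queueing = DetectedNorm("queue at counter", 0.9, 0.7, 0.8, 0.6, 0.2)
    print(decide(queueing), round(norm_trust(queueing), 2))
```

Under these assumed weights, the example norm scores 0.66 and is adopted; lowering its adoption ratio or raising its adoption risk pushes the decision toward emulate or ignore, which mirrors the filtering role the abstract assigns to each factor.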

Keywords

Normative agents · Norm adoption · Norms' trust · Trust in normative multi-agent systems



Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Nurzeatul Hamimah Abdul Hamid (1)
  • Mohd Sharifuddin Ahmad (1)
  • Azhana Ahmad (1)
  • Aida Mustapha (2)
  • Moamin A. Mahmoud (1)
  • Mohd Zaliman Mohd Yusoff (1)

  1. Center for Agent Technology, College of Information Technology, Universiti Tenaga Nasional, Selangor, Malaysia
  2. Faculty of Computer Science and Information Technology, Universiti Tun Hussein Onn, Johor, Malaysia
