
Social Conformity and Its Convergence for Reinforcement Learning

  • Conference paper
Multiagent System Technologies (MATES 2010)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 6251)


Abstract

A dynamic environment, whose behavior may change over time, poses a challenge that the agents situated in it must solve. Changes in an environment such as a market can be quite drastic, ranging from altering the dependencies among some products to adding new actions that build new products. Agents working in such an environment must be ready to embrace these changes in order to maintain their performance, which would otherwise be diminished. They should also cooperate or compete with others, when appropriate, to reach their goals faster than they could individually, exhibiting an always desirable emergent behavior. In this paper a reinforcement learning method guided by social interaction between agents is presented. The proposal aims to show that adaptation is carried out by the society on its own, without a central authority explicitly reporting that changes have occurred, and without the agents even trying to recognize those changes.
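To make the idea concrete, the sketch below shows one plausible way an agent could combine ordinary Q-learning with a social signal: action selection is biased toward the choices recently observed in neighbouring agents, while value updates remain purely local and no central authority signals environment changes. This is a minimal illustration under our own assumptions (the class name, the conformity weight, and the blending rule are hypothetical), not the algorithm defined in the paper.

```python
import random
from collections import defaultdict

class SociallyGuidedQLearner:
    """Illustrative Q-learner whose greedy choice is nudged by peers' behaviour."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1, conformity=0.3):
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.conformity = conformity          # weight given to peers' observed choices
        self.q = defaultdict(float)           # Q[(state, action)] -> value estimate

    def act(self, state, peer_actions):
        """Pick an action, blending own Q-values with peers' recent choices."""
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        # Frequency of each action among peers observed in a comparable situation.
        total = max(len(peer_actions), 1)
        freq = {a: peer_actions.count(a) / total for a in self.actions}
        # Blend individual value estimates with the social signal.
        score = {a: (1 - self.conformity) * self.q[(state, a)] + self.conformity * freq[a]
                 for a in self.actions}
        return max(score, key=score.get)

    def update(self, state, action, reward, next_state):
        """Ordinary one-step Q-learning update; no central change notification."""
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])
```

In this sketch, adaptation to a changed environment emerges indirectly: once some agents' behaviour shifts, the social term pulls the remaining agents toward the new choices even before their own value estimates catch up.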






Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

García-Pardo, J.A., Soler, J., Carrascosa, C. (2010). Social Conformity and Its Convergence for Reinforcement Learning. In: Dix, J., Witteveen, C. (eds) Multiagent System Technologies. MATES 2010. Lecture Notes in Computer Science, vol 6251. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-16178-0_15


  • DOI: https://doi.org/10.1007/978-3-642-16178-0_15

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-16177-3

  • Online ISBN: 978-3-642-16178-0

  • eBook Packages: Computer Science, Computer Science (R0)
