Abstract
Multiagent systems have had a powerful impact on the real world. Many of the systems they study (air traffic control, satellite coordination, rover exploration) are inherently multi-objective, yet they are typically treated as single-objective problems in the research literature. A central concept in multiagent systems is credit assignment: clearly quantifying an individual agent’s impact on overall system performance. In this work we extend credit assignment to multi-objective problems, broadening the traditional multiagent learning framework to account for multiple objectives. We show in two domains that by leveraging established credit assignment principles in a multi-objective setting, we can improve performance by (i) increasing learning speed by up to 10x, (ii) reducing sensitivity to unmodeled disturbances by up to 98.4%, and (iii) producing solutions that dominate every solution discovered by a traditional team-based credit assignment scheme. Our results suggest that in a multiagent multi-objective problem, proper credit assignment is as important to performance as the choice of multi-objective algorithm.
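The credit assignment idea the abstract describes can be sketched with difference rewards, which score each agent by a counterfactual: the system reward with the agent present minus the system reward with its contribution removed, computed separately for each objective. The sketch below is illustrative only; the function names and the "remove by zeroing" counterfactual are assumptions for the example, not the paper's exact formulation.

```python
from typing import Callable, List, Sequence

def difference_rewards(
    joint_action: Sequence[float],
    objectives: List[Callable[[Sequence[float]], float]],
) -> List[List[float]]:
    """Per-agent, per-objective difference rewards.

    For agent i and objective k:  D_i^k = G^k(z) - G^k(z_-i),
    where z_-i is the joint action with agent i's contribution
    replaced by a null action (here: 0.0, an assumption).
    """
    rewards = []
    for i in range(len(joint_action)):
        # Counterfactual joint action with agent i removed.
        counterfactual = list(joint_action)
        counterfactual[i] = 0.0
        rewards.append(
            [g(joint_action) - g(counterfactual) for g in objectives]
        )
    return rewards

# Example: two hypothetical objectives over a 3-agent joint action.
g1 = lambda z: sum(z)    # objective 1: total contribution
g2 = lambda z: -max(z)   # objective 2: penalise the peak load
print(difference_rewards([1.0, 2.0, 3.0], [g1, g2]))
```

Note how each agent receives a vector of rewards, one entry per objective; a multi-objective learner can then trade these off (e.g. via Pareto dominance) while each entry still isolates that agent's own contribution.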
© 2014 Springer International Publishing Switzerland
Cite this paper
Yliniemi, L., Tumer, K. (2014). Multi-objective Multiagent Credit Assignment Through Difference Rewards in Reinforcement Learning. In: Dick, G., et al. Simulated Evolution and Learning. SEAL 2014. Lecture Notes in Computer Science, vol 8886. Springer, Cham. https://doi.org/10.1007/978-3-319-13563-2_35
DOI: https://doi.org/10.1007/978-3-319-13563-2_35
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-13562-5
Online ISBN: 978-3-319-13563-2
eBook Packages: Computer Science (R0)