Abstract
Multiagent systems offer a new paradigm in which learning techniques can be useful. We focus on applying lazy learning to multiagent systems where each agent learns individually and also learns when to cooperate in order to improve its performance. We present experiments in which CBR agents use an adapted version of LID (Lazy Induction of Descriptions), a CBR method for classification. We discuss a collaboration policy among agents, called Bounded Counsel, that improves the agents' performance with respect to their isolated performance. We then use decision tree induction and discretization techniques to learn how to tune the Bounded Counsel policy to a specific multiagent system, always preserving the individual autonomy of the agents and the privacy of their case bases. Empirical results concerning accuracy, cost, and robustness with respect to the number of agents and case-base size are presented, together with comparisons against the Committee collaboration policy, in which all agents always collaborate.
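The contrast between the two policies can be sketched in code. The following is a minimal illustration, not the paper's implementation: the `solve` method is a hypothetical stand-in for LID-based CBR classification (returning a label and a confidence), and the per-agent `threshold` stands in for the tuned termination criterion that the paper learns via decision tree induction. Under Bounded Counsel, an agent first answers from its own case base and consults peers only while its answer remains insufficiently confident; under Committee, every agent always votes.

```python
import random


class Agent:
    """Illustrative CBR-style agent. `solve` is a hypothetical stand-in
    for LID classification; it returns (label, confidence)."""

    def __init__(self, name, threshold=0.7):
        self.name = name
        # Stand-in for the termination criterion the paper tunes per system.
        self.threshold = threshold

    def solve(self, problem):
        # Deterministic pseudo-prediction for the sake of a runnable sketch.
        rng = random.Random((self.name, problem))
        return rng.choice(["A", "B"]), rng.random()


def bounded_counsel(agents, problem):
    """Bounded Counsel (sketch): answer locally first; consult peers one
    at a time only while the aggregate answer is not confident enough."""
    asker, *peers = agents
    label, conf = asker.solve(problem)
    votes = {label: conf}
    for peer in peers:
        if max(votes.values()) >= asker.threshold:
            break  # confident enough: stop asking (this bounds the counsel)
        p_label, p_conf = peer.solve(problem)
        votes[p_label] = votes.get(p_label, 0.0) + p_conf
    return max(votes, key=votes.get)


def committee(agents, problem):
    """Committee policy: every agent always votes; highest aggregate wins."""
    votes = {}
    for agent in agents:
        label, conf = agent.solve(problem)
        votes[label] = votes.get(label, 0.0) + conf
    return max(votes, key=votes.get)
```

Note the design trade-off the abstract measures: Bounded Counsel saves communication cost by stopping early, while Committee pays full cost for (potentially) higher accuracy; neither policy requires agents to share their case bases, only their answers.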
Copyright information
© 2001 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Ontañón, S., Plaza, E. (2001). Learning When to Collaborate among Learning Agents. In: De Raedt, L., Flach, P. (eds) Machine Learning: ECML 2001. ECML 2001. Lecture Notes in Computer Science(), vol 2167. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44795-4_34
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-42536-6
Online ISBN: 978-3-540-44795-5