Multi-agent Learning by Distributed Feature Extraction

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4865)

Abstract

Finding the right data representation is essential for virtually every machine learning task. We discuss an extension of this representation problem: in the collaborative representation problem, the aim is to find an optimal data representation for each learning agent in a multi-agent system, such that the overall performance of the system is optimized, without assuming that all agents learn the same underlying concept. We also analyze the problem of keeping the common terminology in which agents express their hypotheses as compact and comprehensible as possible by forcing them to use the same features wherever possible. We analyze the complexity of this problem and show under which conditions an optimal solution can be found. We then propose a simple heuristic algorithm and show that it can be implemented efficiently in a multi-agent system. The approach is exemplified on the problem of collaborative media organization and evaluated on several synthetic and real-world datasets.
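The abstract only sketches the setting, so the following is a minimal, hypothetical illustration and not the paper's actual method: several agents, each learning a different concept over a shared feature vocabulary, run a greedy forward feature selection in which features already used by other agents receive a small score bonus, nudging the system toward a compact common terminology. All names (greedy_select, collaborative_selection, reuse_bonus) and the use of scikit-learn as the base learner are assumptions made for illustration.

# Hypothetical sketch only -- not the algorithm proposed in the paper.
# Each agent greedily selects features for its own concept, with a small
# bonus for features already chosen by other agents, so that the shared
# feature vocabulary ("common terminology") stays compact.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def subset_score(X, y, features):
    # Cross-validated accuracy of a simple base learner on a feature subset.
    clf = DecisionTreeClassifier(max_depth=3, random_state=0)
    return cross_val_score(clf, X[:, features], y, cv=3).mean()

def greedy_select(X, y, shared, n_features=3, reuse_bonus=0.02):
    # Greedy forward selection; features already in `shared` get a small bonus.
    selected = []
    for _ in range(n_features):
        best, best_val = None, -np.inf
        for f in set(range(X.shape[1])) - set(selected):
            val = subset_score(X, y, selected + [f])
            if f in shared:
                val += reuse_bonus
            if val > best_val:
                best, best_val = f, val
        selected.append(best)
    return selected

def collaborative_selection(datasets, rounds=2):
    # Agents take turns re-selecting features given the current shared set.
    shared, per_agent = set(), {}
    for _ in range(rounds):
        for name, (X, y) in datasets.items():
            per_agent[name] = greedy_select(X, y, shared)
            shared |= set(per_agent[name])
    return per_agent, shared

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    # Two agents learn different but overlapping concepts over the same features.
    datasets = {
        "agent_a": (X, (X[:, 0] + X[:, 1] > 0).astype(int)),
        "agent_b": (X, (X[:, 1] + X[:, 2] > 0).astype(int)),
    }
    per_agent, shared = collaborative_selection(datasets)
    print(per_agent, "shared vocabulary size:", len(shared))

In this toy setup the bonus only matters when two candidate features perform comparably for an agent; it then prefers a feature its peers already use, which keeps the union of selected features small without forcing all agents onto an identical representation.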



Author information

Wurst, M.

Editor information

Karl Tuyls, Ann Nowe, Zahia Guessoum, Daniel Kudenko


Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Wurst, M. (2008). Multi-agent Learning by Distributed Feature Extraction. In: Tuyls, K., Nowe, A., Guessoum, Z., Kudenko, D. (eds) Adaptive Agents and Multi-Agent Systems III. Adaptation and Multi-Agent Learning. AAMAS 2005, ALAMAS 2006, ALAMAS 2007. Lecture Notes in Computer Science, vol 4865. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-77949-0_17

  • DOI: https://doi.org/10.1007/978-3-540-77949-0_17

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-77947-6

  • Online ISBN: 978-3-540-77949-0

  • eBook Packages: Computer Science, Computer Science (R0)
