Mixture of Random Prototype-Based Local Experts

  • Conference paper
Hybrid Artificial Intelligence Systems (HAIS 2010)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 6076)

Abstract

The Mixture of Experts (ME) is one of the most popular ensemble methods used in pattern recognition and machine learning. The algorithm stochastically partitions the input space of the problem into a number of subspaces, with each expert becoming specialized on one subspace. The ME manages this process with a separate network, called the gating network, which is trained together with the experts. In this paper, we propose a modified version of the ME algorithm that first partitions the original problem into regions centered on prototypes and then uses a simple distance-based gating function to specialize the expert networks. Each expert's contribution to classifying an input sample is weighted according to the distance between the input and a prototype embedded in that expert. As a result, an accurate classifier with shorter training time and fewer parameters is obtained. Experimental results on a binary toy problem and on selected datasets from the UCI machine learning repository show the robustness of the proposed method compared to the standard ME model.
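
To make the gating scheme concrete, below is a minimal sketch of the idea the abstract describes, under stated assumptions: each expert is anchored to a randomly drawn training sample acting as its prototype, inputs are routed by a softmax over negative input-to-prototype distances, and each expert is trained on the region its prototype wins. The class name RandomPrototypeME, the temperature tau, the hard training assignment, and the use of scikit-learn MLPs as experts are illustrative choices, not details taken from the paper.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    class RandomPrototypeME:
        """Mixture of experts with a fixed, distance-based gate (sketch)."""

        def __init__(self, n_experts=4, tau=1.0, seed=0):
            self.n_experts = n_experts
            self.tau = tau  # gate temperature (illustrative parameter)
            self.rng = np.random.default_rng(seed)

        def _gate(self, X):
            # Weight of expert i on sample x: softmax_i(-||x - p_i|| / tau),
            # so experts whose prototype lies closer receive larger weights.
            d = np.linalg.norm(X[:, None, :] - self.prototypes[None, :, :], axis=2)
            logits = -d / self.tau
            logits -= logits.max(axis=1, keepdims=True)  # numerical stability
            w = np.exp(logits)
            return w / w.sum(axis=1, keepdims=True)

        def fit(self, X, y):
            # Draw one random training sample per expert as its fixed prototype.
            idx = self.rng.choice(len(X), size=self.n_experts, replace=False)
            self.prototypes = X[idx]
            self.classes_ = np.unique(y)
            assign = self._gate(X).argmax(axis=1)  # hard assignment (assumption)
            self.experts = []
            for i in range(self.n_experts):
                mask = assign == i
                # Fall back to the full data if a region is empty or single-class.
                if not mask.any() or np.unique(y[mask]).size < 2:
                    mask = np.ones(len(X), dtype=bool)
                clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500,
                                    random_state=i).fit(X[mask], y[mask])
                self.experts.append(clf)
            return self

        def predict(self, X):
            # Blend expert posteriors, weighted by the distance-based gate.
            g = self._gate(X)
            proba = np.zeros((len(X), len(self.classes_)))
            for i, clf in enumerate(self.experts):
                cols = np.searchsorted(self.classes_, clf.classes_)
                proba[:, cols] += g[:, i:i + 1] * clf.predict_proba(X)
            return self.classes_[proba.argmax(axis=1)]

Note that the prototypes are fixed before the experts are trained, so the gate itself has no trainable parameters; this is consistent with the shorter training time and smaller parameter count claimed in the abstract.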

Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Armano, G., Hatami, N. (2010). Mixture of Random Prototype-Based Local Experts. In: Graña Romay, M., Corchado, E., Garcia Sebastian, M.T. (eds) Hybrid Artificial Intelligence Systems. HAIS 2010. Lecture Notes in Computer Science (LNAI), vol 6076. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-13769-3_67

  • DOI: https://doi.org/10.1007/978-3-642-13769-3_67

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-13768-6

  • Online ISBN: 978-3-642-13769-3

  • eBook Packages: Computer Science (R0)
