
Adaptive and Competitive Committee Machine Architecture

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 3610)

Abstract

A learning problem has three distinct components: the model representation, the learning criterion (target function), and the implementation algorithm. This paper focuses on the close relation between the choice of learning criterion for a committee machine and both network approximation and competitive adaptation. By minimizing the KL divergence between posterior distributions, we derive a general posterior modular architecture and the corresponding form of the learning criterion, which exhibits notable adaptability and scalability. Furthermore, starting from the generalized KL divergence defined on the finite-measure manifold of information geometry, we show that when each module is assumed to be Gaussian, the proposed learning criterion reduces to the so-called Mahalanobis divergence, of which the ordinary mean-square-error approximation is a special case.
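
To make the Gaussian special case concrete, the following is a minimal worked equation (our illustration, not taken from the paper): for two Gaussian posteriors sharing a covariance matrix Σ, the KL divergence between them collapses to half the squared Mahalanobis distance between their means, and for isotropic Σ = σ²I it is proportional to the mean square error.

```latex
% Assumed setting for illustration: two Gaussian module posteriors with
% means \mu_1, \mu_2 and a shared covariance matrix \Sigma.
D_{\mathrm{KL}}\bigl(\mathcal{N}(\mu_1,\Sigma)\,\big\|\,\mathcal{N}(\mu_2,\Sigma)\bigr)
  = \tfrac{1}{2}\,(\mu_1 - \mu_2)^{\top} \Sigma^{-1} (\mu_1 - \mu_2)
% = half the squared Mahalanobis distance between the means.
% With isotropic covariance \Sigma = \sigma^2 I this becomes
%   \tfrac{1}{2\sigma^2}\,\lVert \mu_1 - \mu_2 \rVert^2,
% i.e. proportional to the ordinary mean-square-error criterion.
```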

Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Yang, J., Luo, S. (2005). Adaptive and Competitive Committee Machine Architecture. In: Wang, L., Chen, K., Ong, Y.S. (eds) Advances in Natural Computation. ICNC 2005. Lecture Notes in Computer Science, vol 3610. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11539087_38

  • DOI: https://doi.org/10.1007/11539087_38

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-28323-2

  • Online ISBN: 978-3-540-31853-8

  • eBook Packages: Computer Science (R0)
