Concept Formation Using Incremental Gaussian Mixture Models

  • Paulo Martins Engel
  • Milton Roberto Heinen
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6419)

Abstract

This paper presents a new algorithm for incremental concept formation based on a Bayesian framework. The algorithm, called IGMM (Incremental Gaussian Mixture Model), uses a probabilistic approach for modeling the environment and can therefore draw on well-founded statistical arguments. IGMM creates and continually adjusts a probabilistic model consistent with all sequentially presented data, without storing or revisiting previous training data. It is particularly useful for incremental clustering of data streams, as encountered in the domains of moving-object trajectories and mobile robotics. IGMM builds an incremental knowledge model of the domain consisting of primitive concepts involving all observed variables. Experiments with simulated data streams of sonar readings from a mobile robot show that IGMM can efficiently segment trajectories, detecting higher-order concepts such as “wall at right” and “curve at left”.
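To make the idea of incremental mixture learning concrete, the sketch below shows one way such an algorithm can process a data stream one sample at a time: each incoming sample either updates the existing Gaussian components through recursive (online) estimation equations or, if it is judged novel under a likelihood threshold, spawns a new component. This is a minimal illustrative sketch, not the authors' implementation; the class name IncrementalGMM, the diagonal-covariance assumption, and the parameters tau and sigma_ini are assumptions made for the example.

import numpy as np

class IncrementalGMM:
    """Hypothetical sketch of an incremental Gaussian mixture learner."""

    def __init__(self, dim, tau=0.01, sigma_ini=1.0):
        self.dim = dim              # dimensionality of the input stream
        self.tau = tau              # novelty threshold on component likelihood (assumed)
        self.sigma_ini = sigma_ini  # initial variance of a newly created component (assumed)
        self.means = []             # component mean vectors
        self.vars = []              # diagonal covariance vectors
        self.sp = []                # accumulated posterior mass per component

    def _pdf(self, x, mu, var):
        # Diagonal-covariance Gaussian density.
        norm = np.prod(np.sqrt(2.0 * np.pi * var))
        return np.exp(-0.5 * np.sum((x - mu) ** 2 / var)) / norm

    def learn(self, x):
        x = np.asarray(x, dtype=float)
        dens = np.array([self._pdf(x, m, v) for m, v in zip(self.means, self.vars)])
        if dens.size == 0 or dens.max() < self.tau:
            # Novelty criterion fires: create a new component centred on the sample.
            self.means.append(x.copy())
            self.vars.append(np.full(self.dim, self.sigma_ini))
            self.sp.append(1.0)
            return len(self.means) - 1
        # Otherwise update every component in proportion to its posterior responsibility,
        # without storing or revisiting past samples.
        priors = np.array(self.sp) / np.sum(self.sp)
        post = priors * dens
        post /= post.sum()
        for j, p in enumerate(post):
            self.sp[j] += p
            w = p / self.sp[j]                        # per-component learning rate decays over time
            self.means[j] += w * (x - self.means[j])
            self.vars[j] += w * ((x - self.means[j]) ** 2 - self.vars[j])
        return int(np.argmax(post))

In a trajectory-segmentation setting, each stable component of such a model plays the role of a primitive concept (e.g., a recurring sonar-reading pattern), and the stream of winning-component indices returned by learn() marks the segment boundaries.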

Keywords

Concept Formation · Incremental Learning · Unsupervised Learning · Bayesian Methods · EM Algorithm · Finite Mixtures · Clustering


Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Paulo Martins Engel (1)
  • Milton Roberto Heinen (1)
  1. UFRGS – Informatics Institute, Porto Alegre, Brazil
