
Part of the book series: Advanced Information and Knowledge Processing (AI&KP)

Given a set of examples of a concept, the learning problem can be described as finding a general rule that explains the examples, even though only a sample of limited size is available. The examples are generally referred to as data. The difficulty of the learning problem is comparable to that faced by children learning to speak from the sounds emitted by adults. The learning problem can therefore be stated as follows: given a sample of limited size, find a concise description of the data. Learning methods can be grouped into three broad families: supervised learning, reinforcement learning and unsupervised learning.
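
As an illustration of this idea of a concise data description obtained by unsupervised learning, the sketch below runs a minimal batch k-means on a toy sample: the sample is summarized by k centroids plus a cluster label per point. This is only an illustrative sketch, not code from the chapter; the function name k_means, the toy two-blob data, and the use of NumPy are assumptions of the example.

    import numpy as np

    def k_means(data, k, n_iter=100, seed=0):
        """Minimal batch k-means sketch: alternate assignment and centroid update."""
        rng = np.random.default_rng(seed)
        # Initialize centroids with k distinct points drawn from the sample.
        centroids = data[rng.choice(len(data), size=k, replace=False)]
        for _ in range(n_iter):
            # Assign each point to its nearest centroid (Euclidean distance).
            distances = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
            labels = distances.argmin(axis=1)
            # Recompute each centroid as the mean of the points assigned to it;
            # keep the old centroid if a cluster happens to be empty.
            new_centroids = np.array([
                data[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
                for j in range(k)
            ])
            if np.allclose(new_centroids, centroids):
                break
            centroids = new_centroids
        return centroids, labels

    # Toy example (hypothetical data): two well-separated Gaussian blobs in the plane.
    rng = np.random.default_rng(1)
    sample = np.vstack([
        rng.normal(loc=(0.0, 0.0), scale=0.5, size=(100, 2)),
        rng.normal(loc=(4.0, 4.0), scale=0.5, size=(100, 2)),
    ])
    centroids, labels = k_means(sample, k=2)
    print(centroids)  # two centroids, one near (0, 0) and one near (4, 4)

Here the 200 sample points are compressed into two centroids and a label per point, which is exactly the kind of concise description of a limited-size sample that clustering methods aim to produce.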




Copyright information

© 2008 Springer

About this chapter

Cite this chapter

(2008). Clustering Methods. In: Machine Learning for Audio, Image and Video Analysis. Advanced Information and Knowledge Processing. Springer, London. https://doi.org/10.1007/978-1-84800-007-0_6


  • DOI: https://doi.org/10.1007/978-1-84800-007-0_6

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-84800-006-3

  • Online ISBN: 978-1-84800-007-0

  • eBook Packages: Computer Science (R0)
