
Visualizing Multidimensional Data through Multilayer Perceptron Maps

  • Antonio Neme
  • Antonio Nido
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6593)

Abstract

Visualization of high-dimensional data is a major task in data mining. The main idea of visualization is to map data from the high-dimensional space onto positions in a low-dimensional space. Among all possible mappings, only those that produce maps that are good approximations of the data distribution observed in the high-dimensional space are of interest. Here, we present a mapping scheme based on multilayer perceptrons that forms a two-dimensional representation of high-dimensional data. The core idea is that the system maps every vector to a position in the two-dimensional space. We then measure how closely this map resembles the distribution in the original high-dimensional space, which yields an error measure. Based on this error, we apply reinforcement learning to multilayer perceptrons to find good maps. We describe the model and present results on well-known benchmark datasets. We conclude that the multilayer perceptron is a good tool for visualizing high-dimensional data.
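To make the pipeline in the abstract concrete, the sketch below takes it at face value: a small multilayer perceptron projects each input vector to 2-D, a stress-like discrepancy between pairwise distances in the data space and the map serves as the error measure, and a simple accept-if-better weight search stands in for the reinforcement-learning step. The error function, network size, and search loop are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def mlp_map(X, W1, b1, W2, b2):
    """Map the rows of X to 2-D with a one-hidden-layer perceptron."""
    H = np.tanh(X @ W1 + b1)      # hidden layer activations
    return H @ W2 + b2            # 2-D output coordinates

def pairwise_dist(Y):
    """Euclidean distance matrix for the rows of Y."""
    diff = Y[:, None, :] - Y[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def map_error(X, Y):
    """Discrepancy between distance structure in the data space and in the map
    (a Sammon-like stress; the paper's exact error measure may differ)."""
    DX, DY = pairwise_dist(X), pairwise_dist(Y)
    iu = np.triu_indices(len(X), k=1)
    return np.mean((DX[iu] - DY[iu]) ** 2)

# Minimal search loop: perturb the weights, keep changes that lower the error.
# This is only a stand-in for the reinforcement-learning scheme of the paper.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))     # toy high-dimensional data
d, h = X.shape[1], 8
params = [rng.normal(scale=0.1, size=s) for s in [(d, h), (h,), (h, 2), (2,)]]
best = map_error(X, mlp_map(X, *params))

for _ in range(2000):
    trial = [p + rng.normal(scale=0.02, size=p.shape) for p in params]
    err = map_error(X, mlp_map(X, *trial))
    if err < best:                # reward: accept the better map
        params, best = trial, err

Y = mlp_map(X, *params)           # final 2-D coordinates for visualization
```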

Keywords

data visualization, reinforcement learning, multilayer perceptrons



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Antonio Neme (1, 2)
  • Antonio Nido (2)
  1. Adaptive Informatics Research Centre, Aalto University, Helsinki, Finland
  2. Complex Systems Group, Universidad Autonoma de la Ciudad de Mexico, Mexico
