Assessing Deep Learning Architectures for Visualizing Maya Hieroglyphs

  • Edgar Roman-Rangel
  • Stephane Marchand-Maillet
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10267)

Abstract

This work extends the non-parametric dimensionality reduction method t-SNE [11] to unseen data. Specifically, we use retrieval experiments to quantitatively assess the performance of several existing methods that enable out-of-sample t-SNE. We also propose using deep learning to construct a multilayer network that approximates the t-SNE mapping function, so that, once trained, it can be applied to unseen data. We conducted experiments on a set of images showing Maya hieroglyphs. This dataset is especially challenging, as it contains multi-label, weakly annotated instances. Our results show that deep learning is well suited to this task when compared with previous methods.
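The paper does not include code; the following is a minimal sketch of the underlying idea of learning a parametric approximation to a t-SNE mapping so it can be applied to unseen samples. It uses scikit-learn's TSNE and a small multilayer perceptron as stand-ins; the network architecture, regression-on-fixed-coordinates training scheme, and random placeholder data are illustrative assumptions, not the authors' actual setup or glyph descriptors.

```python
# Sketch: out-of-sample t-SNE via a learned mapping network.
# X_train / X_new are hypothetical placeholders for image descriptors.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 128))   # training descriptors (placeholder)
X_new = rng.normal(size=(50, 128))      # unseen descriptors (placeholder)

# 1. Run standard (non-parametric) t-SNE on the training set only.
Y_train = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X_train)

# 2. Train a multilayer network to regress the 2-D t-SNE coordinates
#    from the original high-dimensional descriptors.
net = MLPRegressor(hidden_layer_sizes=(256, 64), max_iter=2000, random_state=0)
net.fit(X_train, Y_train)

# 3. Apply the learned mapping to unseen data (out-of-sample extension).
Y_new = net.predict(X_new)
```

A closer analogue to parametric t-SNE [10] would train the network directly on the t-SNE objective rather than on fixed target coordinates; the regression-on-targets variant above is only the simplest way to obtain a reusable mapping.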

Keywords

t-SNE · Deep learning · Visualization

Notes

Acknowledgments

This work was supported by the Swiss-NSF MAAYA project (SNSF-144238).

References

  1. Bengio, Y., Courville, A., Vincent, P.: Representation learning: a review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 35(8), 1798–1828 (2013)
  2. Bunte, K., Biehl, M., Hammer, B.: A general framework for dimensionality-reducing data visualization mapping. Neural Comput. 24(3), 771–804 (2012)
  3. Gatica-Perez, D., Gayol, C.P., Marchand-Maillet, S., Odobez, J.-M., Roman-Rangel, E., Krempel, G., Grube, N.: The MAAYA project: multimedia analysis and access for documentation and decipherment of Maya epigraphy. In: Workshop DH (2014)
  4. Gisbrecht, A., Lueks, W., Mokbel, B., Hammer, B.: Out-of-sample kernel extensions for nonparametric dimensionality reduction. In: Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (2012)
  5. Gisbrecht, A., Schulz, A., Hammer, B.: Parametric nonlinear dimensionality reduction using kernel t-SNE. Neurocomputing 147, 71–82 (2015)
  6. Hinton, G., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)
  7. Kim, H., Choo, J., Reddy, C.K., Park, H.: Doubly supervised embedding based on class labels and intrinsic clusters for high-dimensional data visualization. Neurocomputing 150, 570–582 (2015)
  8. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. CoRR abs/1409.1556 (2014)
  9. Thompson, J.: A Catalog of Maya Hieroglyphs. University of Oklahoma Press, Norman (1962)
  10. van der Maaten, L.: Learning a parametric embedding by preserving local structure. In: Proceedings of the International Conference on Artificial Intelligence and Statistics (2009)
  11. van der Maaten, L., Hinton, G.: Visualizing data using t-SNE. J. Mach. Learn. Res. 9, 2579–2605 (2008)
  12. Wang, W., Huang, Y., Wang, Y., Wang, L.: Generalized autoencoder: a neural network framework for dimensionality reduction. In: Proceedings of IEEE CVPR (2014)

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Department of Computer Science, University of Geneva, Geneva, Switzerland