Abstract
Autoencoders that acquire feature-space models from unsupervised data have become an important technique for designing neural-network-based systems. In this paper, we focus on the reusability of sparse autoencoders for handwritten characters. In existing studies, the training bias of sparse autoencoders is generally constrained more strongly than that of other autoencoders with respect to the number of activated intermediate units. We investigate the role that trained units play as another direction of training bias toward a more reusable autoencoder. As a basis for this investigation, we manually selected three autoencoders and compared their reusability in two experiments. The first is a letter-identification experiment on characters that are faded or blurred, so that the structure of the original character has collapsed. The second is an experiment distinguishing the lines that form letters from line segments in the non-text parts of a document, such as figures and tables. As a result, we found that the intermediate units of the most reusable autoencoder in our experiments behave as binary functions.
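The sparsity constraint described above, limiting how many intermediate units may activate at once, can be sketched as the forward pass of a k-sparse autoencoder, where all but the k largest hidden activations are zeroed before reconstruction. The layer sizes, weights, and value of k below are illustrative assumptions, not the configuration used in the paper:

```python
import numpy as np

def k_sparse_forward(x, W_enc, b_enc, W_dec, b_dec, k):
    """Forward pass of a k-sparse autoencoder: only the k largest
    hidden activations are kept; the rest are zeroed out."""
    h = np.maximum(0.0, x @ W_enc + b_enc)   # ReLU hidden activations
    idx = np.argsort(h)[:-k]                 # indices of all but the top-k units
    h_sparse = h.copy()
    h_sparse[idx] = 0.0                      # enforce the sparsity constraint
    x_hat = h_sparse @ W_dec + b_dec         # linear reconstruction
    return x_hat, h_sparse

# Toy example: 16-dimensional input, 8 hidden units, k = 2
rng = np.random.default_rng(0)
x = rng.normal(size=16)
W_enc = rng.normal(scale=0.1, size=(16, 8))
b_enc = np.zeros(8)
W_dec = rng.normal(scale=0.1, size=(8, 16))
b_dec = np.zeros(16)

x_hat, h = k_sparse_forward(x, W_enc, b_enc, W_dec, b_dec, k=2)
print(np.count_nonzero(h))  # at most 2 hidden units remain active
```

At training time the reconstruction error would be backpropagated only through the surviving units, which is what biases the learned bases toward sparse, reusable features.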
© 2018 Springer International Publishing AG, part of Springer Nature
Cite this paper
Okada, T., Takeuchi, K. (2018). Comparing Sparse Autoencoders for Acquisition of More Robust Bases in Handwritten Characters. In: Huynh, VN., Inuiguchi, M., Tran, D., Denoeux, T. (eds) Integrated Uncertainty in Knowledge Modelling and Decision Making. IUKM 2018. Lecture Notes in Computer Science(), vol 10758. Springer, Cham. https://doi.org/10.1007/978-3-319-75429-1_12
Print ISBN: 978-3-319-75428-4
Online ISBN: 978-3-319-75429-1