Abstract
In this paper, we propose a new approach to unsupervised learning using autoencoders with a drop strategy (DrAE). Unlike explicitly regularized autoencoders (ERAE), DrAE adds no explicit regularization term to the cost function. Instead, a series of drop strategies, such as dropout, DropConnect, denoising, winner-take-all, and local winner-take-all, is exploited during the training phase to learn robust feature representations. When training a DrAE, a subset of units or weights is set to zero. Our experiments on the MNIST dataset show that DrAE performs better than or comparably to ERAE.
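To make the idea concrete, the NumPy sketch below (our illustration, not the authors' code) shows how each named drop strategy zeroes a different part of a one-hidden-layer autoencoder's training-time forward pass. All function names, the strategy selector, and the hyperparameters (drop rate p, sparsity level k) are illustrative assumptions; local winner-take-all and test-time activation rescaling are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(h, p=0.5):
    # Zero each hidden unit independently with probability p.
    return h * (rng.random(h.shape) >= p)

def drop_connect(W, p=0.5):
    # Zero individual weights rather than whole units.
    return W * (rng.random(W.shape) >= p)

def denoise(x, p=0.25):
    # Masking-noise corruption of the input, as in denoising autoencoders.
    return x * (rng.random(x.shape) >= p)

def winner_take_all(h, k=10):
    # Keep only the k largest activations per sample; zero the rest
    # (assumes the hidden layer has at least k units).
    out = np.zeros_like(h)
    idx = np.argsort(h, axis=1)[:, -k:]
    np.put_along_axis(out, idx, np.take_along_axis(h, idx, axis=1), axis=1)
    return out

def forward(x, W1, b1, W2, b2, strategy="dropout"):
    # Training-time forward pass applying exactly one drop strategy.
    if strategy == "denoising":
        x = denoise(x)
    W = drop_connect(W1) if strategy == "dropconnect" else W1
    h = np.maximum(0.0, x @ W + b1)   # encoder with ReLU activation
    if strategy == "dropout":
        h = dropout(h)
    elif strategy == "wta":
        h = winner_take_all(h)
    return h @ W2 + b2                # decoder reconstructs the input

# Example: a batch of 32 flattened 28x28 images through a 256-unit DrAE.
x = rng.random((32, 784))
W1 = rng.standard_normal((784, 256)) * 0.01
W2 = rng.standard_normal((256, 784)) * 0.01
x_hat = forward(x, W1, np.zeros(256), W2, np.zeros(784), strategy="wta")
```

In each case the reconstruction error between x_hat and the uncorrupted input would drive training, so the zeroing itself acts as the only regularizer.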
Acknowledgments
This work was supported by the National Natural Science Foundation of China under Grant No. 61373055, the Industry Project of the Provincial Department of Education of Jiangsu Province (Grant No. JH10-28), and the Industry Oriented Project of the Jiangsu Provincial Department of Technology (Grant No. BY2012059).
Copyright information
© 2016 Springer International Publishing AG
About this paper
Cite this paper
Hu, C., Wu, X.-J. (2016). Autoencoders with Drop Strategy. In: Liu, C.-L., Hussain, A., Luo, B., Tan, K., Zeng, Y., Zhang, Z. (eds.) Advances in Brain Inspired Cognitive Systems. BICS 2016. Lecture Notes in Computer Science, vol. 10023. Springer, Cham. https://doi.org/10.1007/978-3-319-49685-6_8
Print ISBN: 978-3-319-49684-9
Online ISBN: 978-3-319-49685-6