Abstract
Multi-layer models of sparse coding (deep dictionary learning) and dimensionality reduction (PCANet) have shown promise as unsupervised learning models for image classification tasks. However, pure implementations of these models suffer from limited generalisation and high computational cost. This work introduces the Deep Hebbian Network (DHN), which combines the advantages of sparse coding, dimensionality reduction, and convolutional neural networks for learning features from images. Unlike in other deep neural networks, both the learning rules and the neural architectures of this model are derived from cost-function minimisations. Moreover, the DHN can be trained online owing to its Hebbian components. Different configurations of the DHN have been tested on scene and image classification tasks. Experiments show that the DHN can automatically discover highly discriminative features directly from image pixels, without data augmentation or semi-labeling.
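The abstract's claim that the learning rules are derived from cost-function minimisation can be illustrated with the classic example of this idea: Oja's rule, a Hebbian update that arises from minimising a reconstruction cost and converges to the leading principal component. This is a generic sketch of that principle, not the DHN's actual update rule; the data and learning rate below are illustrative assumptions.

```python
import numpy as np

def oja_update(w, x, lr):
    """One online Hebbian update (Oja's rule): dw = lr * y * (x - y*w).

    The y*x term is pure Hebbian learning; the -y^2*w term is a decay
    that keeps ||w|| bounded, and the combined rule performs gradient
    descent on a reconstruction cost ||x - y*w||^2 with y = w.x.
    """
    y = w @ x
    return w + lr * y * (x - y * w)

rng = np.random.default_rng(0)
# Synthetic 2-D data whose dominant variance lies along [1, 1]
data = rng.normal(size=(5000, 2)) @ np.array([[2.0, 1.8], [1.8, 2.0]])

w = rng.normal(size=2)
w /= np.linalg.norm(w)
for x in data:           # online: one sample at a time, no batch needed
    w = oja_update(w, x, lr=0.001)

# w approaches (up to sign) the leading principal direction ~ [0.71, 0.71]
print(w, np.linalg.norm(w))
```

The same derive-the-rule-from-a-cost pattern, extended with non-negativity or sparsity constraints, yields the Hebbian/anti-Hebbian networks that the DHN stacks into a deep architecture.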
© 2017 Springer International Publishing AG
Cite this paper
Bahroun, Y., Hunsicker, E., Soltoggio, A. (2017). Building Efficient Deep Hebbian Networks for Image Classification Tasks. In: Lintas, A., Rovetta, S., Verschure, P., Villa, A. (eds) Artificial Neural Networks and Machine Learning – ICANN 2017. Lecture Notes in Computer Science, vol. 10613. Springer, Cham. https://doi.org/10.1007/978-3-319-68600-4_42
Print ISBN: 978-3-319-68599-1
Online ISBN: 978-3-319-68600-4