
EffectFace: A Fast and Efficient Deep Neural Network Model for Face Recognition

  • Weicheng Li
  • Dan Jia
  • Jia Zhai
  • Jihong Cai
  • Han Zhang
  • Lianyi Zhang
  • Hailong Yang
  • Depei Qian
  • Rui Wang
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 908)

Abstract

Although Deep Neural Networks (DNNs) have achieved great success in image recognition, DNN applications still demand substantial resources in terms of both memory usage and computing time, which makes it barely feasible to deploy a complete DNN system on resource-limited devices such as smartphones and small embedded systems. In this paper, we present a DNN model named EffectFace, designed for higher storage and computation efficiency without compromising accuracy.

EffectFace comprises two sub-modules: EffectDet for face detection and EffectApp for face recognition. In EffectDet, we use sparse and small-scale convolution kernels (filters) to reduce the number of weights and hence the memory usage. In EffectApp, we apply pruning and weight sharing to reduce the weights further. At the output stage of the network, we replace the traditional Softmax function with a new loss function that produces feature vectors of the input face images, reducing the output dimension of the network from n, the number of categories to classify, to a fixed 128. Experiments show that, compared with previous models, the number of weights in EffectFace is dramatically reduced (to less than 10% of previous models) without loss of recognition accuracy.
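To illustrate the compression and embedding ideas described above, the sketch below shows magnitude-based pruning, weight sharing via scalar k-means clustering, and a fixed 128-dimensional, L2-normalized embedding scored with a triplet-style loss. This is a minimal sketch under stated assumptions, not the authors' implementation: the function names, the 10% keep ratio, the 16-entry codebook, and the choice of triplet loss are all illustrative, and NumPy stands in for a full deep-learning framework.

```python
# Illustrative sketch only; names, ratios, and cluster counts are assumptions.
import numpy as np

def prune_by_magnitude(weights, keep_ratio=0.1):
    """Zero out all but the largest-magnitude weights (keep_ratio is an assumed value)."""
    flat = np.abs(weights).ravel()
    k = max(1, int(keep_ratio * flat.size))
    threshold = np.partition(flat, -k)[-k]      # k-th largest magnitude
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

def share_weights(weights, n_clusters=16, iters=20):
    """Weight sharing: cluster surviving weights so each stores only a codebook index."""
    nonzero = weights[weights != 0].reshape(-1, 1)
    # plain scalar k-means, initialized with evenly spaced centroids
    centroids = np.linspace(nonzero.min(), nonzero.max(), n_clusters).reshape(-1, 1)
    for _ in range(iters):
        assign = np.argmin(np.abs(nonzero - centroids.T), axis=1)
        for c in range(n_clusters):
            members = nonzero[assign == c]
            if members.size:
                centroids[c] = members.mean()
    codes = np.argmin(np.abs(weights[..., None] - centroids.ravel()), axis=-1)
    quantized = np.where(weights != 0, centroids.ravel()[codes], 0.0)
    return quantized, centroids

def embed(features, projection):
    """Project features to a fixed 128-d, L2-normalized embedding (independent of class count n)."""
    z = features @ projection                   # projection: (feature_dim, 128)
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def triplet_loss(anchor, positive, negative, margin=0.2):
    """One plausible non-Softmax output loss on embeddings (FaceNet-style triplet loss)."""
    pos = np.sum((anchor - positive) ** 2, axis=1)
    neg = np.sum((anchor - negative) ** 2, axis=1)
    return np.mean(np.maximum(pos - neg + margin, 0.0))

# Example: compress one hypothetical convolution filter bank and embed a batch of features.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 3, 3, 3))          # a small conv weight tensor
w_pruned, _ = prune_by_magnitude(w)
w_shared, _ = share_weights(w_pruned)
emb = embed(rng.standard_normal((8, 512)), rng.standard_normal((512, 128)))
```

With a 10% keep ratio and a 16-entry codebook, each surviving weight can be stored as a 4-bit index plus a shared centroid table, which is the kind of storage reduction the abstract refers to; a fixed 128-dimensional output also keeps the final layer independent of the number of identities.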

Keywords

Deep learning · Efficient neural network · Face recognition

Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  • Weicheng Li (1)
  • Dan Jia (1)
  • Jia Zhai (3, 4)
  • Jihong Cai (2)
  • Han Zhang (2)
  • Lianyi Zhang (2)
  • Hailong Yang (1)
  • Depei Qian (1)
  • Rui Wang (1)

  1. School of Computer Science and Engineering, Beihang University, Beijing, China
  2. Science and Technology on Special System Simulation Laboratory, Beijing Simulation Center, Beijing, China
  3. Communication University of China, Beijing, China
  4. Science and Technology on Electromagnetic Scattering Laboratory, Beijing, China
