Age and Gender Recognition on Imbalanced Dataset of Face Images with Deep Learning

  • Dmitry Yudin
  • Maksim Shchendrygin
  • Alexandr Dolzhenko
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1156)


The paper describes the use of deep neural networks based on the ResNet and Xception architectures for recognizing the age and gender of faces in an imbalanced image dataset. The process of collecting the dataset from open sources is described. The training sample contains more than 210,000 images; the testing sample contains more than 1,700 specially selected face images covering different ages and genders. The training data has an imbalanced number of images per class. Classification accuracy for gender and mean absolute error (MAE) for age estimation are used to assess the quality of the results. Age recognition is formulated as a classification task with 101 classes, and gender recognition as a classification task with two categories. The paper analyzes different approaches to data balancing and their influence on recognition results. The computing experiment was carried out on a graphics processor using NVIDIA CUDA technology, and the average recognition time per image is estimated for each deep neural network. The obtained results can be used in software for public space monitoring, collection of visitor statistics, etc.
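The evaluation scheme described above (age as a 101-class distribution reduced to a point estimate, gender as binary classification, inverse-frequency weighting as one common way to counter class imbalance) can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation; all function names are hypothetical, and the age estimate uses the expectation over class probabilities, one standard reduction for 101-class age outputs.

```python
import numpy as np

def expected_age(probs):
    """Point estimate of age as the expectation over 101 class
    probabilities for ages 0..100 (one common reduction)."""
    ages = np.arange(101)
    return float(np.dot(probs, ages))

def mae(y_true, y_pred):
    """Mean absolute error, the age-estimation metric."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def gender_accuracy(y_true, y_pred):
    """Fraction of correct binary gender predictions."""
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))

def inverse_frequency_weights(labels, n_classes):
    """Per-class weights proportional to inverse class frequency --
    one simple balancing strategy for an imbalanced training set."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    counts[counts == 0] = 1.0  # avoid division by zero for empty classes
    return counts.sum() / (n_classes * counts)
```

For example, a softmax output concentrated on age 30 yields `expected_age == 30.0`, and a class with three times the average frequency receives a third of the average weight.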


Age recognition · Gender recognition · Classification · Face image · Imbalanced dataset · Deep neural network



The research was made possible by the Government of the Russian Federation (Agreement № 075-02-2019-967).



Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Moscow Institute of Physics and Technology (National Research University), Moscow, Russia
  2. Belgorod State Technological University named after V.G. Shukhov, Belgorod, Russia
