Face Symmetry Analysis Using a Unified Multi-task CNN for Medical Applications

  • Gary Storey
  • Richard Jiang
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 869)


Facial symmetry analysis can play an important role in the diagnosis and rehabilitation of medical conditions involving facial paralysis, such as Bell’s palsy. Recent advances in computer vision, specifically deep convolutional neural networks and multi-task learning, provide a gateway to fast, state-of-the-art methods for object detection tasks. In this paper, we present a novel unified multi-task CNN framework for simultaneous object proposal, face detection and face symmetry analysis. We highlight the potential of such a framework within the medical domain through experimental results on two test data sets. The results are promising, showing a high level of accuracy for both face detection and symmetry analysis, while also highlighting the low computational overhead of our proposed method, which can process an image in 0.04 s.
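The unified multi-task design described in the abstract, a shared convolutional feature extractor feeding separate heads for face detection and symmetry scoring, can be sketched roughly as follows. This is an illustrative PyTorch toy, not the authors' architecture; the layer sizes, head names, and output dimensions are all assumptions.

```python
import torch
import torch.nn as nn


class MultiTaskFaceNet(nn.Module):
    """Hypothetical sketch of a multi-task CNN: one shared trunk,
    two task-specific heads (face detection and symmetry analysis)."""

    def __init__(self):
        super().__init__()
        # Shared convolutional feature extractor (sizes are illustrative).
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # Task heads share the trunk features; in multi-task training their
        # losses would be summed and backpropagated through the trunk jointly.
        self.face_head = nn.Linear(32, 2)      # face / non-face logits
        self.symmetry_head = nn.Linear(32, 1)  # scalar symmetry score

    def forward(self, x):
        features = self.trunk(x)
        return self.face_head(features), self.symmetry_head(features)


net = MultiTaskFaceNet()
face_logits, symmetry = net(torch.randn(1, 3, 64, 64))
```

Sharing one trunk across tasks is what keeps inference cheap: the expensive convolutional features are computed once per image and reused by every head.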


Keywords: Computer vision · Face recognition · Face analysis · Medical diagnosis



The authors gratefully acknowledge the financial support of the EPSRC grant (EP/P009727/1).



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Department of Computer and Information Sciences, Northumbria University, Newcastle upon Tyne, UK
