
Health and Technology

Volume 9, Issue 5, pp 715–724

Identification and analysis of photometric points on 2D facial images: a machine learning approach in orthodontics

  • Gururajaprasad Kaggal Lakshmana Rao
  • Arvind Channarayapatna Srinivasa
  • Yulita Hanum P. Iskandar
  • Norehan Mokhtar
Original Paper

Abstract

The lack of an effective and automated facial landmark identification tool prompted us to design and develop a smart machine learning approach. The study addresses two objectives. The primary objective is to assess the effectiveness and accuracy of an algorithmic methodology in identifying and analysing facial landmarks on two-dimensional (2D) facial images, and the secondary objective is to understand the clinical application of automation in facial landmark identification. The study utilised 418 facial landmark points and 220 landmark measures from 22 2D facial images of volunteers. The deep learning algorithm 'You Only Look Once' (YOLO) was used to determine the accuracy of the developed system and its clinical applications. The system identified all 418 landmarks, with facial recognition being 100%. Of the 220 landmark measures, the system provided 48 (21.82%) in the error range of 0 to 1 mm, 75 (34.09%) in the error range of 2 to 3 mm, 92 (41.82%) in the error range of 4 to 5 mm, and 5 (2.27%) in the range of 6 mm. This smart and innovative approach provides valuable training and a helpful tool for students performing clinical facial analysis. The automated system, with its effective and efficient algorithm, delivers fast and reliable landmark identification and analysis.
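One plausible way to apply YOLO to this task is to train the detector with each photometric point as its own object class, take the centre of each predicted bounding box as the landmark coordinate, and then compute inter-landmark distances as the landmark measures. The sketch below illustrates that pipeline under stated assumptions: it uses the ultralytics Python API, an illustrative weights file name, example landmark labels, and an assumed pixel-to-millimetre calibration factor, none of which are taken from the paper.

# Hypothetical sketch of YOLO-based photometric point detection on a 2D frontal
# facial photograph. Each landmark is assumed to be trained as its own object class,
# and the centre of its bounding box is taken as the landmark coordinate.
# The weights file, class names, and MM_PER_PIXEL calibration are illustrative only.
import math
from ultralytics import YOLO  # assumes the ultralytics package is installed

MM_PER_PIXEL = 0.25  # assumed calibration from a scale marker in the photograph


def detect_landmarks(image_path, weights="facial_landmarks_yolo.pt"):
    """Return {landmark_name: (x, y)} from the highest-confidence box per class."""
    model = YOLO(weights)
    result = model(image_path)[0]  # single image, so take the first Results object
    best = {}  # landmark name -> (confidence, (x, y))
    for box, cls, conf in zip(result.boxes.xyxy.tolist(),
                              result.boxes.cls.tolist(),
                              result.boxes.conf.tolist()):
        x1, y1, x2, y2 = box
        name = result.names[int(cls)]  # e.g. "subnasale", "menton" (assumed labels)
        centre = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
        if name not in best or conf > best[name][0]:
            best[name] = (conf, centre)
    return {name: centre for name, (_, centre) in best.items()}


def landmark_measure(landmarks, a, b):
    """Euclidean distance between two landmarks, converted to millimetres."""
    return math.dist(landmarks[a], landmarks[b]) * MM_PER_PIXEL


if __name__ == "__main__":
    points = detect_landmarks("frontal_photo.jpg")
    print(f"Subnasale-menton distance: {landmark_measure(points, 'subnasale', 'menton'):.1f} mm")

In such a sketch, the automatically derived measures could be compared against manual measurements to reproduce the kind of error-range analysis reported above.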

Keywords

Orthodontic photometric points · Orthodontic facial measures · Frontal facial photography · YOLO · Machine learning algorithm · Deep learning · Orthodontics · Smart learning

Notes

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

No ethical approval was required as the study only utilised images of volunteers' faces. Suitable written consent was obtained from each volunteer before the start of the study.

Informed consent

Informed written consent was obtained from all individual participants included in the study.


Copyright information

© IUPESM and Springer-Verlag GmbH Germany, part of Springer Nature 2019

Authors and Affiliations

  1. Craniofacial and Biomaterial Science Cluster, Advanced Medical and Dental Institute, Universiti Sains Malaysia, Penang, Malaysia
  2. Cognitive Computing and Data Science Research Lab, Global Technology Office, Cognizant Technology Solutions, Bengaluru, India
  3. Graduate School of Business, Universiti Sains Malaysia, Penang, Malaysia
