
A ConvNet-Based Approach Applied to the Gesticulation Control of a Social Robot

  • Conference paper
In: Advances in Emerging Trends and Technologies (ICAETT 2019)

Abstract

This paper presents the implementation of a facial gesture recognition system based on a Convolutional Neural Network (CNN) for controlling the gesticulation of an interactive social robot with a humanoid appearance, designed to fulfil the proposed objectives. In addition, the robot incorporates an auditory communication system for human-robot interaction based on visemes, coordinating the robot's mouth movements with the processed audio of text converted into the robot's voice (text-to-speech). The CNN embedded in the social-interactive robot achieves a precision of 61%, while the synchronization offset between the robot's mouth movement and its voice audio is about 0.1 s. The aim is to endow social robots with mechanisms for natural interaction with people, facilitating their application in children's teaching-learning, medical therapies, and entertainment.
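The paper does not publish its synchronization code, but the viseme mechanism it describes — mapping the phoneme sequence of the synthesized speech to mouth shapes and timing the mouth commands against the audio — can be sketched as follows. All names here (the phoneme labels, viseme groups, and timeline format) are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch of viseme-based mouth synchronization, assuming a TTS
# engine that reports a phoneme timeline as (phoneme, start_seconds) pairs.
# The phoneme-to-viseme grouping below is a common simplification, not the
# mapping used in the paper.

PHONEME_TO_VISEME = {
    "AA": "open", "AE": "open", "AH": "open",
    "IY": "smile", "EH": "smile",
    "UW": "round", "OW": "round",
    "M": "closed", "B": "closed", "P": "closed",
    "SIL": "closed",  # silence keeps the mouth closed
}

def viseme_schedule(phoneme_timeline):
    """Collapse a [(phoneme, start_sec), ...] timeline into a list of
    (viseme, start_sec) commands, merging consecutive identical visemes
    so the mouth servo is only commanded when the shape changes."""
    schedule = []
    for phoneme, start in phoneme_timeline:
        viseme = PHONEME_TO_VISEME.get(phoneme, "closed")
        if not schedule or schedule[-1][0] != viseme:
            schedule.append((viseme, start))
    return schedule

# Example timeline (timings invented for illustration):
timeline = [("SIL", 0.0), ("AH", 0.12), ("M", 0.30),
            ("AA", 0.41), ("AE", 0.55), ("SIL", 0.70)]
print(viseme_schedule(timeline))
```

At playback time, each command would be dispatched to the mouth actuator when the audio clock reaches its start time; keeping that dispatch loop's latency below the paper's reported 0.1 s offset is the practical synchronization constraint.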



Acknowledgment

This work was financed in part by the Universidad Técnica de Ambato (UTA) and its Research and Development Department (DIDE) under project 1919-CU-P-2017.

Author information


Corresponding author

Correspondence to Patricio Encalada.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Arias, E., Encalada, P., Tigre, F., Granizo, C., Gordon, C., Garcia, M.V. (2020). A ConvNet-Based Approach Applied to the Gesticulation Control of a Social Robot. In: Botto-Tobar, M., León-Acurio, J., Díaz Cadena, A., Montiel Díaz, P. (eds) Advances in Emerging Trends and Technologies. ICAETT 2019. Advances in Intelligent Systems and Computing, vol 1066. Springer, Cham. https://doi.org/10.1007/978-3-030-32022-5_18
