A ConvNet-Based Approach Applied to the Gesticulation Control of a Social Robot

  • Edisson Arias
  • Patricio Encalada
  • Franklin Tigre
  • Cesar Granizo
  • Carlos Gordon
  • Marcelo V. Garcia
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1066)

Abstract

This paper presents the implementation of a facial gesture recognition system based on a Convolutional Neural Network (CNN) algorithm for controlling the gesticulation of an interactive social robot with a humanoid appearance, designed to fulfil the proposed objectives. In addition, the robot incorporates an auditory communication system for Human-Robot interaction based on visemes, coordinating the robot's mouth movements with the processed audio of a text-to-speech engine that produces the robot's voice. The CNN incorporated in the social-interactive robot achieves an accuracy of 61%, while the synchronization between the robot's mouth movements and its voice differs by 0.1 s. In this way, the aim is to endow social robots with mechanisms for natural interaction with people, thus facilitating their application in children's teaching-learning, medical therapies, and entertainment.
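The CNN pipeline summarized above (convolutional feature extraction over a face image, followed by a classifier over emotion categories) can be illustrated with a minimal NumPy sketch. This is not the authors' network: the 48×48 input size, the single 3×3 filter, the random weights, and the assumed seven emotion classes are all illustrative stand-ins for a trained model.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image with one filter."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(0.0, x)

def max_pool(x, size=2):
    """Non-overlapping max pooling; trims edges not divisible by `size`."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
face = rng.random((48, 48))            # stand-in for a 48x48 grayscale face crop
kernel = rng.standard_normal((3, 3))   # one convolutional filter (random, not trained)
features = max_pool(relu(conv2d(face, kernel)))    # 23x23 pooled feature map
weights = rng.standard_normal((7, features.size))  # assumed 7 emotion classes
probs = softmax(weights @ features.ravel())        # class probabilities, summing to 1
```

A trained network would stack several such convolution/pooling stages and learn the filter and classifier weights from labeled face images; the sketch only shows the forward-pass structure.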

Keywords

Deep Learning · Human-Robot interaction · Social robots · Neural networks · Visemes

Acknowledgment

This work was financed in part by Universidad Técnica de Ambato (UTA) and its Research and Development Department (DIDE) under project 1919-CU-P-2017.


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Universidad Técnica de Ambato (UTA), Ambato, Ecuador
  2. University of Basque Country (UPV/EHU), Bilbao, Spain
