Independent Feeding of People Affected with Osteoarthritis Through a Didactic Robot and Visual Control

  • Arturo Jiménez
  • Katherine Aroca
  • Vicente Hallo (Email author)
  • Nancy Velasco
  • Darío Mendoza
Conference paper
Part of the Smart Innovation, Systems and Technologies book series (SIST, volume 152)


Abstract

This chapter presents a system that assists people affected by osteoarthritis in feeding themselves, sparing the patient discomfort in the joints of the hands. The system consists of an artificial vision algorithm and an independent feeding stage. The vision algorithm detects and tracks the face and then locates the position of the mouth; the feeding stage, a didactic robotic arm, takes the food from the dish and delivers it to the user's mouth. The program was developed in Python using the OpenCV and Dlib libraries. The face alignment method achieves an average effectiveness of 94%.
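The mouth-localization step can be sketched as follows. This is a minimal illustration rather than the authors' implementation; it assumes Dlib's standard 68-point facial-landmark model, in which indices 48–67 describe the mouth, and the `mouth_center` helper and sample coordinates below are hypothetical.

```python
# Sketch of the mouth-localization step, assuming Dlib's standard
# 68-point facial-landmark model (indices 48-67 cover the mouth).
MOUTH_START, MOUTH_END = 48, 68

def mouth_center(landmarks):
    """Average the mouth landmarks to get a target point for the arm.

    landmarks: list of 68 (x, y) pixel coordinates, as produced by a
    Dlib shape predictor (e.g. shape_predictor_68_face_landmarks.dat).
    """
    mouth = landmarks[MOUTH_START:MOUTH_END]
    cx = sum(x for x, _ in mouth) / len(mouth)
    cy = sum(y for _, y in mouth) / len(mouth)
    return cx, cy

# Illustrative usage with synthetic landmarks: 48 dummy points for the
# rest of the face, then 20 mouth points clustered around (320, 240).
fake = [(0, 0)] * 48 + [(320 + dx, 240 + dy)
                        for dx in range(-2, 3) for dy in range(-2, 2)]
print(mouth_center(fake))  # → (320.0, 239.5)
```

In the full pipeline, these landmarks would come per video frame from `dlib.get_frontal_face_detector()` and a `dlib.shape_predictor` loaded with the 68-landmark model; the resulting pixel coordinate of the mouth is then handed to the robotic-arm controller.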


Keywords

Visual control · Recognition and monitoring · Robot feeding people · Osteoarthritis



Acknowledgments

We are grateful to "Universidad de las Fuerzas Armadas ESPE" for its tireless dedication to teaching.


References

  1. Song, W.-K., Kim, J.: Novel assistive robot for self-feeding. In: Robotic Systems - Applications, Control and Programming. InTech (2012)
  2.
  3. Topping, M.: Handy 1, a robotic aid to independence for severely disabled people. In: 7th International Conference on Rehabilitation Robotics, pp. 142–147 (2001)
  4. Song, W.-K., et al.: Design of novel feeding robot for Korean food. In: International Conference on Smart Homes and Health Telematics, pp. 152–159. Springer, Berlin, Heidelberg (2010)
  5. Aranda, D.: Electrónica: plataformas Arduino y Raspberry Pi. Dalaga, Buenos Aires (2014)
  6. Lee, S., Lee, C.: Illumination normalization and skin color validation for robust. In: IS&T International Symposium on Electronic Imaging 2016, Seoul (2016)
  7. Castrillón, M., Déniz, O., Hernández, D.: A comparison of face and facial feature detectors based on the Viola–Jones general object detection framework. Mach. Vis. Appl. 22(3), 481–494 (2011)
  8. Dollár, P., Appel, R., Belongie, S., Perona, P.: Fast feature pyramids for object detection. IEEE Trans. Pattern Anal. Mach. Intell. 36(8), 1532–1545 (2014)
  9. Klare, B.F., Klein, B., Taborsky, E., Blanton, A., Cheney, J., Allen, K., Grother, P., Mah, A., Jain, A.K.: Pushing the frontiers of unconstrained face detection and recognition: IARPA Janus Benchmark A. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston (2015)
  10. Liao, S., Jain, A.K., Li, S.Z.: A fast and accurate unconstrained face detector. IEEE Trans. Pattern Anal. Mach. Intell. 38(2), 211–223 (2015)
  11. Cheney, J., Klein, B.: Unconstrained face detection: state of the art baseline and challenges. In: International Conference on Biometrics (ICB), Phuket (2015)
  12. Kazemi, V., Sullivan, J.: One millisecond face alignment with an ensemble of regression trees. Stockholm (2017)
  13. Koyuncu, B., Güzel, M.: Software development for the kinematic. World Acad. Sci. Eng. Technol. 1(6), 1575–1580 (2007)
  14. Cruz, A.B.: Inverse kinematics. In: Fundamentos de robótica. McGraw-Hill, Madrid (2013)

Copyright information

© Springer Nature Singapore Pte Ltd. 2020

Authors and Affiliations

  • Arturo Jiménez (1)
  • Katherine Aroca (1)
  • Vicente Hallo (1), Email author
  • Nancy Velasco (2)
  • Darío Mendoza (1)

  1. Departamento de Energía y Mecánica, Universidad de las Fuerzas Armadas ESPE, Sangolquí, Ecuador
  2. Departamento de Ciencias Exactas, Universidad de las Fuerzas Armadas ESPE, Sangolquí, Ecuador
