Adaptive cognitive robot using dynamic perception with fast deep-learning and adaptive on-line predictive control

  • Liz Rincon
  • Enrique Coronado
  • Christopher Law
  • Gentiane Venture
Conference paper
Part of the Mechanisms and Machine Science book series (Mechan. Machine Science, volume 73)


This paper presents a novel adaptive cognitive robot control architecture that adapts the robot's actions and motions to the dynamics of both the environment and the human, embedding "expressive states" in a cognitive model that directly tunes the robot's optimal control. We developed an integrated system that combines dynamic perception with fast deep-learning algorithms, affect-based cognition models, and adaptive generalized predictive controllers (AGPC). The adaptation works on the perceptive states, which are transformed into cognitive data used as the main requirement in the control design. The perception level detects and tracks the environment so the robot can react to it and create personalized actions. The cognition level is built on the PAD (pleasure-arousal-dominance) model, which defines different robot states related to the robot's actions and tasks, and is constructed with a k-nearest-neighbours (KNN) algorithm. The adaptation is commanded by an AGPC whose cost functions are computed from the PAD values and updated according to the cognitive states. Results show the system's ability to perform robot tasks with continuously expressive and personalized behaviours.
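The pipeline described above, perceptual features classified into a PAD expressive state by KNN, which in turn reshapes the AGPC cost function, can be sketched roughly as follows. All feature values, state labels, PAD entries, and the arousal-to-weight mapping here are illustrative assumptions, not the paper's actual parameters:

```python
import math

# Hypothetical training set: perceptual feature vectors
# (e.g. normalized distance to human, motion speed) labelled
# with PAD-style expressive states.
TRAIN = [
    ((0.2, 0.9), "excited"),   # close, fast motion
    ((0.3, 0.8), "excited"),
    ((0.9, 0.1), "calm"),      # far, slow motion
    ((0.8, 0.2), "calm"),
]

# Illustrative (pleasure, arousal, dominance) values per state.
PAD = {"excited": (0.6, 0.8, 0.4), "calm": (0.5, 0.1, 0.3)}

def knn_state(features, k=3):
    """Classify perceptual features into an expressive state by k-NN
    majority vote over Euclidean distance."""
    dists = sorted((math.dist(features, x), label) for x, label in TRAIN)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

def agpc_control_weight(state, base=1.0):
    """Map the arousal component of the current PAD state to a
    control-effort weight in the GPC cost function: higher arousal
    gives a smaller penalty on control increments, i.e. faster,
    more expressive motion (an assumed mapping)."""
    _, arousal, _ = PAD[state]
    return base * (1.0 - 0.5 * arousal)

state = knn_state((0.25, 0.85))
weight = agpc_control_weight(state)
```

In the paper's architecture the classifier output would feed the AGPC at every control update, so the weight above stands in for the cost-function terms recomputed from the PAD values.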


Adaptive optimal robot control · Adaptive generalized predictive control · Cognitive models · Human-robot interaction





Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Department of Mechanical Systems Engineering, Tokyo University of Agriculture and Technology, Fuchu, Japan