
Learning-Based Task Failure Prediction for Selective Dual-Arm Manipulation in Warehouse Stowing

  • Shingo Kitagawa
  • Kentaro Wada
  • Kei Okada
  • Masayuki Inaba
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 867)

Abstract

Stowing is one of the main tasks in warehouse automation, and manipulation with a vacuum gripper has recently become known as a practical approach. However, the gripper holds an object only from its upper side, so even a small disturbance can cause task failures such as dropping or protrusion. In this paper, we aim to realize a more stable stowing task and propose a stowing system in which the robot selectively stows an object with two arms when task failures are likely to occur. For this selective stowing, we predict the occurrence of task failures with a convolutional neural network (CNN) and select a proper motion from the prediction results. The network predicts the probability of task failure for both the single-arm and the dual-arm stowing motion, and we design a motion selection algorithm that evaluates the two motions and selects the better one. In experiments, we implemented our system on a real stowing task and achieved a success rate of 58.0%, higher than the 49.0% of the single-arm stowing system, over 100 trials.
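The selection step described above can be illustrated with a minimal sketch: given the CNN-predicted failure probabilities for the single-arm and dual-arm motions, choose which motion to execute. The threshold value and the preference for the single-arm motion when both are viable are assumptions made here for illustration; the paper's actual selection rule may differ.

```python
def select_stow_motion(p_fail_single: float, p_fail_dual: float,
                       threshold: float = 0.5) -> str:
    """Choose a stowing motion from CNN-predicted failure probabilities.

    Prefers the single-arm motion when its predicted failure probability
    is below the (hypothetical) threshold; otherwise falls back to the
    dual-arm motion if it is predicted to be more reliable.
    """
    if p_fail_single < threshold:
        return "single-arm"
    if p_fail_dual < p_fail_single:
        return "dual-arm"
    return "single-arm"


if __name__ == "__main__":
    # Example: the CNN predicts a high failure risk for single-arm stowing,
    # so the dual-arm motion is selected.
    print(select_stow_motion(p_fail_single=0.7, p_fail_dual=0.2))  # dual-arm
```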

Keywords

Dual-arm manipulation · Failure prediction · Motion selection · Task-based learning · Warehouse automation · Stowing task


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Shingo Kitagawa (1)
  • Kentaro Wada (1)
  • Kei Okada (1)
  • Masayuki Inaba (1)

  1. The University of Tokyo, Tokyo, Japan
