Using Convolutional Neural Networks for Assembly Activity Recognition in Robot Assisted Manual Production

  • Henning Petruck
  • Alexander Mertens
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10902)

Abstract

Due to ever-shortening product life cycles and the growing number of product variants, the demand for flexible production systems that include human-robot collaboration (HRC) is rising. One key factor in HRC is the stress caused by unfamiliar work alongside the robot. To reduce stress-induced strain in assembly tasks, we propose adjusting cycle times to the individual worker's performance, so that the stress exerted on the working person by a waiting robot is minimized. For an autonomous adaptation of the cycle time, the production system should be aware of the human's actions and assembly progress without requiring manual input. We therefore propose a machine learning approach to activity recognition in assembly: a convolutional neural network distinguishes between different activities during assembly by analyzing motion data of the hands of the working person. The results show that the network is suitable for distinguishing between nine assembly activities, such as screwing with a screwdriver, screwing with a hexagon wrench, or general assembly.
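To make the classification pipeline concrete, the following is a minimal sketch of such a network in PyTorch (the paper does not specify a framework or architectural details). The channel count (six, i.e. the 3D positions of both hands), the window length (128 samples), and all layer sizes are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch (not the authors' implementation): a 1D CNN that classifies
# fixed-length windows of hand-motion data into nine assembly activities.
# Channel count, window length, and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn


class AssemblyActivityCNN(nn.Module):
    def __init__(self, n_channels: int = 6, n_classes: int = 9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),  # convolve over time
            nn.ReLU(),
            nn.MaxPool1d(2),                                      # 128 -> 64 samples
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),                                      # 64 -> 32 samples
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 32, 128),  # 64 feature maps x 32 remaining time steps
            nn.ReLU(),
            nn.Dropout(0.5),          # dropout for regularization
            nn.Linear(128, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time), e.g. (N, 6, 128)
        return self.classifier(self.features(x))


if __name__ == "__main__":
    model = AssemblyActivityCNN()
    windows = torch.randn(8, 6, 128)   # batch of 8 motion windows
    logits = model(windows)            # (8, 9) class scores
    print(logits.argmax(dim=1))        # predicted activity per window
```

In this sketch, the convolutions slide over the time axis only, so the same motion pattern (e.g. the repetitive rotation of a screwdriver) can be detected regardless of when it occurs within the window.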

Keywords

Human-robot collaboration · Human-machine systems · Manual assembly · Machine learning · Neural networks · Convolutional neural network · Pattern recognition · Activity recognition · Motion tracking

Acknowledgments

The authors would like to thank the German Research Foundation (DFG) for its kind support within the Cluster of Excellence “Integrative Production Technology for High-Wage Countries”.


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Institute of Industrial Engineering and Ergonomics, RWTH Aachen University, Aachen, Germany
