Industrial Robot Sorting System for Municipal Solid Waste

  • Zhifei Zhang
  • Hao Wang
  • Hongzhang Song
  • Shaobo Zhang
  • Jianhua Zhang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11741)

Abstract

Industrial sorting robots that rely on traditional visual algorithms identify and locate targets inefficiently in complex environments. To address this problem, our system introduces deep learning to detect and locate solid waste on top of the existing algorithm. In this paper, an industrial robot sorting system platform is built using deep learning. First, a depth camera captures the visual area on the conveyor belt. The computer uses a trained SSD model to recognize and locate the target, obtaining the type and location of the solid waste. Then, building on the object detection, solid waste objects are segmented by three-dimensional background removal. Finally, the geometric center coordinates and the angle of the long side of the target object are sent to the robot, which completes the classification and grasping of the solid waste. Simulation experiments show that the features learned by the SSD deep neural network are robust and stable in complex environments and enable efficient solid waste sorting.
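The final pipeline step described above, recovering a grasp center and long-side angle from a detected object, can be illustrated with a short sketch. The Python code below is a minimal illustration, not the authors' implementation: the function name, the assumed depth image in millimetres, and the belt-depth parameters (belt_depth_mm, tol_mm) are all hypothetical. It removes the conveyor-belt background in depth inside one SSD bounding box, then uses OpenCV's minimum-area enclosing rectangle to obtain the geometric center and the orientation of the long side.

    import cv2
    import numpy as np

    def grasp_pose_from_detection(depth_mm, box, belt_depth_mm=900.0, tol_mm=15.0):
        """Return the geometric centre (u, v) in pixels and the long-side
        angle in degrees for the object inside one SSD detection box.

        depth_mm      -- H x W depth image in millimetres from the depth camera
        box           -- (x1, y1, x2, y2) bounding box from the trained SSD model
        belt_depth_mm -- assumed camera-to-belt distance (hypothetical value)
        tol_mm        -- depth tolerance separating object from belt background
        """
        x1, y1, x2, y2 = (int(v) for v in box)
        roi = depth_mm[y1:y2, x1:x2]

        # Three-dimensional background removal: keep only pixels that are
        # noticeably closer to the camera than the empty conveyor belt.
        mask = ((roi > 0) & (roi < belt_depth_mm - tol_mm)).astype(np.uint8) * 255
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        largest = max(contours, key=cv2.contourArea)

        # Minimum-area enclosing rectangle: centre, side lengths and angle.
        (cx, cy), (w, h), angle = cv2.minAreaRect(largest)

        # cv2.minAreaRect reports the angle of one side; shift by 90 degrees
        # when needed so the result is the orientation of the long side.
        if w < h:
            angle += 90.0
        return (cx + x1, cy + y1), angle % 180.0

In a complete system, the returned pixel center would still have to be mapped into robot coordinates through the camera calibration before the grasp command is sent.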

Keywords

Complex background · Robotic grasping · Deep learning · Garbage sorting

Acknowledgments

This work was supported by the National Natural Science Foundation of China (61876167 and U1509207).


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Zhifei Zhang (1)
  • Hao Wang (1)
  • Hongzhang Song (2)
  • Shaobo Zhang (2)
  • Jianhua Zhang (1)
  1. College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, China
  2. Hangzhou Visual Entropy Technology Co., Ltd., Hangzhou, China
