Industrial Robot Sorting System for Municipal Solid Waste
Industrial sorting robots that rely on traditional visual algorithms identify and locate targets inefficiently in complex environments. To address this problem, our system introduces deep learning into the existing algorithm to detect and locate solid waste. In this paper, an industrial robot sorting system platform is built on deep learning technology. First, a depth camera captures the visual area on the conveyor belt. The computer uses a trained SSD model to recognize and locate targets, obtaining the type and position of each piece of solid waste. Then, building on the detection results, solid-waste objects are segmented by three-dimensional background removal. Finally, the geometric center coordinates and the angle of the long side of the target object are sent to the robot, which classifies and grasps the solid waste. Simulation experiments show that the features learned by the SSD deep neural network are robust and stable in complex environments and enable efficient solid-waste sorting.
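The final step above, extracting the geometric center and the long-side angle of a segmented object, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the 3D background-removal step yields a binary pixel mask per object, and the helper name `grasp_pose_from_mask` is hypothetical. The long axis is estimated here by PCA over the foreground pixel coordinates.

```python
import numpy as np

def grasp_pose_from_mask(mask):
    """Return the geometric center (cx, cy) and the long-axis angle
    (degrees) of a binary object mask, via PCA on pixel coordinates."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    center = pts.mean(axis=0)                    # geometric center
    cov = np.cov((pts - center).T)               # 2x2 covariance of coordinates
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigh returns ascending eigenvalues
    long_axis = eigvecs[:, np.argmax(eigvals)]   # principal (long-side) direction
    angle = np.degrees(np.arctan2(long_axis[1], long_axis[0]))
    return center, angle % 180.0                 # fold sign ambiguity into [0, 180)

# Example: a horizontal 20x6 rectangle -> long axis along x (angle ~ 0 degrees)
mask = np.zeros((50, 50), dtype=np.uint8)
mask[20:26, 10:30] = 1
center, angle = grasp_pose_from_mask(mask)
```

In a deployed system these two quantities, together with the SSD class label, would be transformed into the robot's base frame before being sent to the controller.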
Keywords: Complex background · Robotic grasping · Deep learning · Garbage sorting
This work was supported by the National Natural Science Foundation of China (61876167 and U1509207).