3-D shape recognition of target objects for stacked rubble withdrawal work performed by rescue robots

  • Masatoshi Hatano
  • Toshifumi Fujii
Original Article


In this research, we aim to develop a method for rubble-withdrawal rescue robots to recognize the three-dimensional shape of each piece of stacked rubble individually. At disaster sites, the shapes, masses, and stacking states of rubble are varied and unknown. Grasping positions on the rubble and the way each piece is removed must therefore be chosen so that the stack is not broken down and does not collapse onto victims. It is thus necessary to recognize each piece of stacked rubble individually and to identify its features, such as shape, mass, and center-of-gravity position. In this paper, we propose a 3-D object shape recognition system that uses an RGB-D sensor and a 3-D reference marker. We also propose a rubble-extraction method based on the SSD (Single Shot MultiBox Detector), an AI (deep-learning) object detector. Experiments with our constructed prototype rescue robot confirm the validity of the proposed method: the target pieces of stacked rubble were recognized individually.
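As a rough illustration of the per-piece extraction step summarized above, the sketch below back-projects the depth pixels inside one detected bounding box (such as an SSD detection of a rubble piece) into a 3-D point cloud under a pinhole camera model. The helper name `backproject_box` and the intrinsic values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def backproject_box(depth, box, fx, fy, cx, cy):
    """Back-project the valid depth pixels inside a bounding box
    (u_min, v_min, u_max, v_max) into an (N, 3) point cloud,
    assuming a pinhole camera with intrinsics fx, fy, cx, cy."""
    u0, v0, u1, v1 = box
    patch = depth[v0:v1, u0:u1]
    vs, us = np.nonzero(patch > 0)          # keep pixels with valid depth only
    z = patch[vs, us]
    x = (us + u0 - cx) * z / fx             # pinhole back-projection
    y = (vs + v0 - cy) * z / fy
    return np.stack([x, y, z], axis=1)      # points in the camera frame

# Toy depth map: a flat 100x100-pixel region 1 m from the camera,
# standing in for one detected rubble piece.
depth = np.zeros((480, 640))
depth[100:200, 100:200] = 1.0
pts = backproject_box(depth, (100, 100, 200, 200),
                      fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(pts.shape)  # (10000, 3)
```

In a full pipeline, each SSD bounding box would yield one such per-piece cloud, which could then be registered against reference shapes (e.g. with ICP) to estimate the piece's pose and geometry.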


Rescue robot · Rubble withdrawal · Shape recognition · Single Shot MultiBox Detector




Copyright information

© International Society of Artificial Life and Robotics (ISAROB) 2019

Authors and Affiliations

  1. Funabashi, Japan
