3D Object Global Localization Using Particle Filter with Back Projection-based Sampling on Saliency
Estimating the 3D pose of a target object with a particle filter suffers from a high-dimensional search space. Because the object may appear anywhere along the camera ray in 3D world space, a huge number of samples covering the whole search space is required, leading to expensive computation and many iterations until convergence. For this reason, we propose a particle filter based on a back projection-based sampling on saliency technique. We obtain the object boundaries as foreground regions using saliency segmentation based on color and depth information, which is robust to complex environments. Moreover, we apply the particle filter with a sampling scheme based on the back projection technique, exploiting the relationship between 3D world space and the 2D image plane. The sampling dimension along the camera ray can be omitted by generating the samples on saliencies in the 2D image plane before back projecting them into 3D world space using depth information. The required number of samples and iterations is thus drastically decreased. In addition, our method perceives the salient regions that may contain the target object; most of the samples are predicted into these promising regions, which makes the algorithm converge rapidly.
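The core idea of the abstract, sampling particles only on salient 2D pixels and lifting them to 3D with depth, can be sketched as follows. This is a minimal illustration assuming a standard pinhole camera model; the function names, the uniform draw over salient pixels, and the intrinsic parameters (`fx`, `fy`, `cx`, `cy`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def back_project(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with measured depth into 3D camera coordinates
    via the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth."""
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

def sample_particles_on_saliency(saliency_mask, depth_map, n_samples,
                                 fx, fy, cx, cy, rng=None):
    """Draw particle positions only from salient pixels, then lift them to 3D.
    Sampling in the 2D image plane first avoids spreading particles along
    the entire camera ray in 3D world space."""
    rng = np.random.default_rng(rng)
    vs, us = np.nonzero(saliency_mask)            # pixel coordinates flagged salient
    idx = rng.choice(len(us), size=n_samples)     # uniform draw over salient pixels
    return np.array([
        back_project(us[i], vs[i], depth_map[vs[i], us[i]], fx, fy, cx, cy)
        for i in idx
    ])
```

Because every generated particle already lies on a measured surface point inside a salient region, the filter starts near plausible object hypotheses rather than covering the full 3D search volume.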
Keywords: 3D pose tracking, back projection-based sampling, particle filter, RGB-D, saliency segmentation