Occlusion Detection in Visual Tracking: A New Framework and A New Benchmark

  • Xiaoguang Niu
  • Yueyang Gu
  • Zhifeng Lu
  • Zehua Hong
  • Yi Tian
  • Kuan Xu
  • Jie Yang
  • Xingqi Fang
  • Yu Qiao (corresponding author)
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11304)

Abstract

Occlusion remains a challenge in visual object tracking. Robustness to occlusion is critical for tracking algorithms, yet it has received relatively little attention. In this paper, we first propose an occlusion detection framework that estimates the proportion of the target that is occluded and uses this estimate to decide whether to update the target model. The framework can be integrated with existing tracking algorithms to increase their robustness to occlusion. We then introduce a new benchmark consisting of sequences in which occlusion is the main difficulty. The sequences are chosen from public benchmarks and are fully annotated. The proposed framework is combined with several standard trackers and evaluated on the new benchmark. The experimental results show that explicitly incorporating occlusion detection improves tracking performance.
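
The sketch below is a minimal illustration of the idea described in the abstract, not the authors' implementation: the occluded proportion is approximated here by a simple histogram-intersection score, and the function names, the occlusion threshold (0.4), and the learning rate (0.02) are illustrative assumptions.

```python
import numpy as np

def occluded_fraction(template_hist, patch_hist, eps=1e-6):
    """Crude occlusion estimate: one minus the histogram intersection
    between the stored target template and the current target patch
    (both L1-normalised). Values near 1 suggest heavy occlusion."""
    t = template_hist / (template_hist.sum() + eps)
    p = patch_hist / (patch_hist.sum() + eps)
    return 1.0 - float(np.minimum(t, p).sum())

def maybe_update(model_hist, patch_hist, lr=0.02, occ_thresh=0.4):
    """Update the appearance model only when the estimated occluded
    proportion is below a threshold; otherwise keep the old model so
    it is not contaminated by the occluder."""
    occ = occluded_fraction(model_hist, patch_hist)
    if occ < occ_thresh:
        model_hist = (1.0 - lr) * model_hist + lr * patch_hist
    return model_hist, occ

# Toy usage: random 32-bin colour histograms stand in for image patches.
rng = np.random.default_rng(0)
model = rng.random(32)
patch = rng.random(32)
model, occ = maybe_update(model, patch)
print(f"estimated occluded fraction: {occ:.2f}")
```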

Keywords

Visual tracking · Occlusion detection · Benchmark

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Xiaoguang Niu (1)
  • Yueyang Gu (1)
  • Zhifeng Lu (2)
  • Zehua Hong (2)
  • Yi Tian (2)
  • Kuan Xu (1)
  • Jie Yang (1)
  • Xingqi Fang (1)
  • Yu Qiao (1, corresponding author)

  1. Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Shanghai, China
  2. Shanghai Electro-Mechanical Engineering Institute, Shanghai, China
