Multimedia Tools and Applications, Volume 77, Issue 20, pp 26485–26507

Accurate target tracking via Gaussian sparsity and locality-constrained coding in heavy occlusion

  • Zhijian Yin
  • Linhan Dai
  • Huilin Xiong
  • Fan Yang
  • Zhen Yang

Abstract

This paper presents a Gaussian sparse representation cooperative model for tracking a target through video sequences with heavy occlusion, combining sparse coding and locality-constrained linear coding. Unlike the usual approach of using only a 1-norm regularization term within the particle filter framework to form the sparse collaborative appearance model (SCM), we employ both the 1-norm and the 2-norm for feature selection and then encode the candidate samples to generate the sparse coefficients. As a result, our method not only obtains sparse solutions easily but also reduces the reconstruction error. Compared with state-of-the-art algorithms, our scheme achieves better performance when tracking a target through heavily occluded video sequences. Extensive tracking experiments compare the proposed algorithm with a variety of other target tracking methods.
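The combined 1-norm and 2-norm regularization described above is, in its generic form, an elastic-net-style coding problem. The sketch below is a minimal numpy illustration of that generic idea, not the authors' exact formulation: it encodes a candidate sample over a dictionary of template atoms by iterative shrinkage-thresholding (ISTA), minimizing a squared reconstruction error plus `lam1` times the 1-norm and `lam2` times the squared 2-norm of the coefficients. All names (`elastic_net_code`, `lam1`, `lam2`) are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Element-wise soft-thresholding operator used by ISTA."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def elastic_net_code(D, x, lam1=0.1, lam2=0.01, n_iter=500):
    """Encode sample x over dictionary D by ISTA, minimizing
    ||x - D c||_2^2 + lam1 * ||c||_1 + lam2 * ||c||_2^2."""
    c = np.zeros(D.shape[1])
    # Step size from the Lipschitz constant of the smooth part.
    L = 2.0 * (np.linalg.norm(D, 2) ** 2 + lam2)
    for _ in range(n_iter):
        grad = 2.0 * D.T @ (D @ c - x) + 2.0 * lam2 * c
        c = soft_threshold(c - grad / L, lam1 / L)
    return c

# Toy dictionary of 4 template atoms; the candidate equals the first
# atom, so the code should concentrate its weight on coefficient 0.
D = np.eye(4)
x = np.array([1.0, 0.0, 0.0, 0.0])
c = elastic_net_code(D, x)
```

The 1-norm term drives most coefficients exactly to zero (sparsity), while the 2-norm term keeps the problem strongly convex and stabilizes the solution, which is the intuition behind combining the two penalties.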

Keywords

Target tracking · Sparse coding · Locality-constrained linear coding · Gaussian sparse representation · Cooperative model

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (61603161, 61650402); the Natural Science Foundation of Jiangxi Province of China (20151BAB207049); the Key Science Foundation of the Educational Commission of Jiangxi Province of China (GJJ160768); the scholastic youth talent support program of Jiangxi Science and Technology Normal University (2016QNBJRC004); the Natural Science Foundation of the Jiangxi Province Key Laboratory of Water Information Cooperative Sensing and Intelligent Processing (2016WICSIP031); and the Key Science Foundation of Jiangxi Science and Technology Normal University (2014XJZD002). We thank LetPub (www.letpub.com) for linguistic assistance during the preparation of this manuscript.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  • Zhijian Yin (1)
  • Linhan Dai (1)
  • Huilin Xiong (2)
  • Fan Yang (1)
  • Zhen Yang (1)
  1. School of Communication and Electronics, Jiangxi Science and Technology Normal University, Nanchang, China
  2. Department of Automation, Shanghai Jiao Tong University, Shanghai, China