Collaborative model with adaptive selection scheme for visual tracking

  • Tianshan Liu
  • Jun Kong (corresponding author)
  • Min Jiang
  • Chenhua Liu
  • Xiaofeng Gu
  • Xiaofeng Wang
Original Article


Visual tracking is a challenging task because it requires an effective appearance model that can cope with numerous disruptive factors. In this paper, we propose a robust object tracking algorithm based on a collaborative model with an adaptive selection scheme. Specifically, using the discriminative features chosen by a feature selection scheme, we develop a sparse discriminative model (SDM) that incorporates a confidence measure strategy. In addition, we present a sparse generative model (SGM) that combines ℓ1 regularization with PCA reconstruction. In contrast to existing hybrid generative-discriminative tracking algorithms, we propose a novel adaptive selection scheme, based on the Euclidean distance, as the joint mechanism; this yields a more reasonable likelihood function for the collaborative model. Experimental results on several challenging image sequences demonstrate that the proposed tracker performs favorably against state-of-the-art methods.
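The SGM's combination of ℓ1 regularization with PCA reconstruction can be illustrated with a minimal numerical sketch. This is an assumption-laden illustration, not the authors' implementation: it assumes the candidate observation y is modeled as Uz + e, where U is an orthonormal PCA basis, z the reconstruction coefficients, and e a sparse error term absorbing occlusion, and it solves min 0.5·||y − Uz − e||² + λ||e||₁ by alternating exact block updates.

```python
import numpy as np

def soft_threshold(x, lam):
    # Elementwise soft-thresholding: proximal operator of lam * ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_pca_reconstruction(y, U, lam=0.5, n_iter=50):
    """Alternating minimization of 0.5*||y - U z - e||^2 + lam*||e||_1,
    assuming U has orthonormal columns (so the z-update is a projection)."""
    e = np.zeros_like(y)
    for _ in range(n_iter):
        z = U.T @ (y - e)                    # exact least-squares coefficients
        e = soft_threshold(y - U @ z, lam)   # sparse error (occlusion) update
    return z, e

# Toy usage: 100-dim observation, 5 PCA components, partial "occlusion"
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((100, 5)))  # orthonormal basis
y = U @ rng.standard_normal(5)
y[:10] += 5.0                                # corrupt the first 10 entries
z, e = sparse_pca_reconstruction(y, U)
# After the final e-update, every residual entry is at most lam in magnitude,
# so the corruption is absorbed by the sparse error term e.
```

Because each e-update is a soft-thresholding of the current residual, the remaining per-entry residual |y − Uz − e| is bounded by λ by construction, which is why the sparse error term (rather than the PCA coefficients) ends up modeling the occluded pixels.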


Keywords: Visual tracking · Collaborative model · Adaptive selection scheme · Sparse representation



This work was partially supported by the National Natural Science Foundation of China (61362030, 61201429), the China Postdoctoral Science Foundation (2015M581720, 2016M600360), the Jiangsu Postdoctoral Science Foundation (1601216C), and the Technology Research Project of the Ministry of Public Security of China (2014JSYJB007).

Supplementary material

Supplementary material 1 (MP4 5176 KB)



Copyright information

© Springer-Verlag GmbH Germany 2017

Authors and Affiliations

  • Tianshan Liu (1)
  • Jun Kong (1, 2) (corresponding author)
  • Min Jiang (1)
  • Chenhua Liu (1)
  • Xiaofeng Gu (1)
  • Xiaofeng Wang (1)
  1. Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Jiangnan University, Wuxi, China
  2. College of Electrical Engineering, Xinjiang University, Urumqi, China
