Cluster Computing, Volume 22, Supplement 3, pp 5739–5747

An object tracking algorithm based on optical flow and temporal–spatial context

  • Yongliang Ma


Image object tracking, one of the hot topics in computer vision, has made great progress in recent years. Nevertheless, no single algorithm has shown good robustness across all kinds of challenging video scenes. The spatial–temporal context (STC) tracking algorithm effectively exploits the information contained in the background and in the appearance of the object, and it achieves good tracking results. However, it can easily fail when the object moves too fast or its location changes too much between frames. Using Harris corner points as feature points, this paper corrects the tracking result of the STC algorithm with the Lucas–Kanade (L–K) optical flow method as an auxiliary technique. Consequently, better tracking results are achieved while preserving the excellent performance of the STC algorithm.
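The abstract describes correcting the STC tracker's output with L–K optical flow on feature points. The paper's actual method is not reproduced here; as a rough illustration only, the sketch below implements a minimal single-scale Lucas–Kanade step in pure NumPy on a synthetic image pair (the function names and the Gaussian-blob test data are this sketch's own; a practical pipeline would instead track Harris corners with a pyramidal L–K, e.g. OpenCV's `cv2.goodFeaturesToTrack(..., useHarrisDetector=True)` followed by `cv2.calcOpticalFlowPyrLK`).

```python
import numpy as np

def gaussian_blob(shape, cy, cx, sigma=3.0):
    """Synthetic smooth image: a Gaussian bump centered at (cy, cx)."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    return np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))

def lk_displacement(prev, curr, y, x, win=9):
    """One single-level Lucas-Kanade step at point (y, x).

    Solves the over-determined brightness-constancy system
        Ix * dx + Iy * dy = -It
    in least squares over a (win x win) window around the point.
    """
    h = win // 2
    Iy, Ix = np.gradient(prev)           # spatial gradients (axis 0 = y)
    It = curr - prev                     # temporal gradient
    sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d                             # estimated [dx, dy]

# Synthetic check: the blob moves one pixel to the right between frames,
# so the recovered displacement should be close to (dx, dy) = (1, 0).
prev = gaussian_blob((40, 40), 20, 20)
curr = gaussian_blob((40, 40), 20, 21)
dx, dy = lk_displacement(prev, curr, 20, 20)
```

In a tracker-correction setting, such per-point displacements (averaged or median-filtered over several corner points inside the predicted box) would shift the STC estimate toward the motion the optical flow observes.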


Keywords: Local context · Spatial–temporal context · Optical flow · Visual object tracking



This paper is supported by the Soft Science Research Projects of Henan Province (Nos. 172400410614 and 172400410629) and the Government Decision-Making Research Projects of Henan Province (No. 2016BC330).



Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2017

Authors and Affiliations

  1. North China University of Water Resources and Electric Power, Zhengzhou, Henan, China
