TA-CFNet: A New CFNet with Target Aware for Object Tracking

  • Jiejie Zhao
  • Yongzhao Zhan
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11901)

Abstract

A network constructed by combining a Siamese network with a correlation filter has become very popular, because the Siamese network provides high accuracy while the correlation filter provides remarkable speed. How to tackle the boundary-effect problem introduced by the filter, and how to fuse the CF layer with multiple CNN layers, are essential problems for this kind of network. Most papers deal with the boundary effect simply by applying a cosine window to every image. However, when the target is too small, the bounding box includes background information, and when the target is too large, the bounding box loses part of the target. In this paper, a new CFNet with target awareness (TA-CFNet) for object tracking is proposed. TA-CFNet integrates the current target position and a feature weight map to form a target likelihood matrix. This matrix is used to optimize and update the correlation filter, so that the template object of the deep tracking network is framed as accurately as possible. Experimental results on the OTB benchmarks for visual tracking demonstrate that the proposed method outperforms other deep-learning trackers.
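The abstract contrasts a plain cosine window with a target likelihood matrix built from the target position and a feature weight map. The following is a minimal, hypothetical sketch of that idea (the paper's exact formulation is not given here): a Gaussian prior centered at the estimated target position, modulated by a per-pixel feature weight map, used to re-weight features before a correlation-filter update. All names and parameters are illustrative assumptions.

```python
import numpy as np

def cosine_window(size):
    # Standard Hann (cosine) window applied to every search patch;
    # it suppresses boundary effects but ignores the actual target extent.
    return np.hanning(size[0])[:, None] * np.hanning(size[1])[None, :]

def target_likelihood(size, center, target_wh, feature_weights):
    # Hypothetical target likelihood matrix: a Gaussian prior around the
    # current target position, with scale tied to the target's width/height,
    # multiplied element-wise by a feature weight map and normalized to [0, 1].
    ys, xs = np.mgrid[0:size[0], 0:size[1]]
    sx, sy = target_wh[0] / 2.0, target_wh[1] / 2.0
    prior = np.exp(-(((ys - center[1]) / sy) ** 2 +
                     ((xs - center[0]) / sx) ** 2) / 2.0)
    mask = prior * feature_weights
    return mask / (mask.max() + 1e-12)

# Toy usage: weight a feature map before updating the correlation filter,
# instead of multiplying by the fixed cosine window alone.
feat = np.random.rand(64, 64)
weights = np.ones((64, 64))          # stand-in for a learned feature weight map
mask = target_likelihood((64, 64), center=(32, 32),
                         target_wh=(20, 30), feature_weights=weights)
weighted_feat = feat * mask
```

Unlike the cosine window, this mask shrinks or grows with the estimated target size, which is the property the abstract argues a fixed window lacks.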

Keywords

Object tracking · Target likelihood matrix · Correlation filter · Siamese network

Notes

Acknowledgments

This work was supported by the National Natural Science Foundation of China (No. 61672268) and the Primary Research & Development Plan of Jiangsu Province (No. BE2015137).


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang, China
