
Extracting Features of Interest from Small Deep Networks for Efficient Visual Tracking

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 11164)

Abstract

Recent deep trackers have achieved impressive performance in visual tracking. Typically, these trackers apply complex deep networks with massive parameters to represent objects, which makes their deployment on resource-limited devices challenging for two reasons: (1) high computational complexity, and (2) a large storage footprint. To address these two issues, this paper proposes a lightweight deep tracker that enables efficient tracking with a small deep convolutional neural network. The tracker adaptively extracts features of interest from different deep layers to represent different objects, and then integrates them into a discriminative correlation filter formulation. Thanks to the small deep network and the selection of deep features, the drop in tracking accuracy is effectively alleviated, while the computation and storage costs are greatly reduced. The tracker runs at a fast speed of 55 fps with only 4.8M parameters. Experimental results on the public OTB2013 and OTB100 benchmarks demonstrate the effectiveness and efficiency of the proposed tracker.
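The discriminative correlation filter (DCF) formulation mentioned in the abstract can be sketched as follows. This is a minimal, single-channel, MOSSE-style closed-form filter in the Fourier domain, not the paper's actual method; the array shapes, the regularizer `lam`, and the function names are illustrative assumptions only.

```python
import numpy as np

def train_filter(feature, desired_response, lam=1e-2):
    """Closed-form correlation filter in the Fourier domain (MOSSE-style).

    Solves H = (G * conj(F)) / (F * conj(F) + lam) elementwise, where
    F = fft2(feature) and G = fft2(desired_response). lam is a small
    regularizer that prevents division by near-zero spectral energy.
    """
    F = np.fft.fft2(feature)
    G = np.fft.fft2(desired_response)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def detect(H, feature):
    """Apply the learned filter to a new patch and return the peak
    location of the response map (the predicted target position)."""
    response = np.real(np.fft.ifft2(H * np.fft.fft2(feature)))
    return np.unravel_index(np.argmax(response), response.shape)
```

In the paper's setting, the single-channel `feature` would be replaced by multi-channel deep features drawn from the selected layers of the small network, with one filter per channel and the per-channel responses summed before locating the peak.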




Acknowledgment

This work is supported in part by the National Key Research and Development Plan (2016YFC0801005), the National Natural Science Foundation of China (61772513) and the International Cooperation Project of Institute of Information Engineering, Chinese Academy of Sciences (Y7Z0511101). Shiming Ge is also supported by Youth Innovation Promotion Association, CAS.

Author information

Corresponding author

Correspondence to Shiming Ge.


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Luo, Z., Ge, S., Hua, Y., Liu, H., Jin, X. (2018). Extracting Features of Interest from Small Deep Networks for Efficient Visual Tracking. In: Hong, R., Cheng, WH., Yamasaki, T., Wang, M., Ngo, CW. (eds) Advances in Multimedia Information Processing – PCM 2018. PCM 2018. Lecture Notes in Computer Science, vol. 11164. Springer, Cham. https://doi.org/10.1007/978-3-030-00776-8_38

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-00776-8_38

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-00775-1

  • Online ISBN: 978-3-030-00776-8

  • eBook Packages: Computer Science, Computer Science (R0)
