The Sixth Visual Object Tracking VOT2018 Challenge Results

  • Matej Kristan
  • Aleš Leonardis
  • Jiří Matas
  • Michael Felsberg
  • Roman Pflugfelder
  • Luka Čehovin Zajc
  • Tomáš Vojíř
  • Goutam Bhat
  • Alan Lukežič
  • Abdelrahman Eldesokey
  • Gustavo Fernández
  • Álvaro García-Martín
  • Álvaro Iglesias-Arias
  • A. Aydin Alatan
  • Abel González-García
  • Alfredo Petrosino
  • Alireza Memarmoghadam
  • Andrea Vedaldi
  • Andrej Muhič
  • Anfeng He
  • Arnold Smeulders
  • Asanka G. Perera
  • Bo Li
  • Boyu Chen
  • Changick Kim
  • Changsheng Xu
  • Changzhen Xiong
  • Cheng Tian
  • Chong Luo
  • Chong Sun
  • Cong Hao
  • Daijin Kim
  • Deepak Mishra
  • Deming Chen
  • Dong Wang
  • Dongyoon Wee
  • Efstratios Gavves
  • Erhan Gundogdu
  • Erik Velasco-Salido
  • Fahad Shahbaz Khan
  • Fan Yang
  • Fei Zhao
  • Feng Li
  • Francesco Battistone
  • George De Ath
  • Gorthi R. K. S. Subrahmanyam
  • Guilherme Bastos
  • Haibin Ling
  • Hamed Kiani Galoogahi
  • Hankyeol Lee
  • Haojie Li
  • Haojie Zhao
  • Heng Fan
  • Honggang Zhang
  • Horst Possegger
  • Houqiang Li
  • Huchuan Lu
  • Hui Zhi
  • Huiyun Li
  • Hyemin Lee
  • Hyung Jin Chang
  • Isabela Drummond
  • Jack Valmadre
  • Jaime Spencer Martin
  • Javaan Chahl
  • Jin Young Choi
  • Jing Li
  • Jinqiao Wang
  • Jinqing Qi
  • Jinyoung Sung
  • Joakim Johnander
  • Joao Henriques
  • Jongwon Choi
  • Joost van de Weijer
  • Jorge Rodríguez Herranz
  • José M. Martínez
  • Josef Kittler
  • Junfei Zhuang
  • Junyu Gao
  • Klemen Grm
  • Lichao Zhang
  • Lijun Wang
  • Lingxiao Yang
  • Litu Rout
  • Liu Si
  • Luca Bertinetto
  • Lutao Chu
  • Manqiang Che
  • Mario Edoardo Maresca
  • Martin Danelljan
  • Ming-Hsuan Yang
  • Mohamed Abdelpakey
  • Mohamed Shehata
  • Myunggu Kang
  • Namhoon Lee
  • Ning Wang
  • Ondrej Miksik
  • P. Moallem
  • Pablo Vicente-Moñivar
  • Pedro Senna
  • Peixia Li
  • Philip Torr
  • Priya Mariam Raju
  • Qian Ruihe
  • Qiang Wang
  • Qin Zhou
  • Qing Guo
  • Rafael Martín-Nieto
  • Rama Krishna Gorthi
  • Ran Tao
  • Richard Bowden
  • Richard Everson
  • Runling Wang
  • Sangdoo Yun
  • Seokeon Choi
  • Sergio Vivas
  • Shuai Bai
  • Shuangping Huang
  • Sihang Wu
  • Simon Hadfield
  • Siwen Wang
  • Stuart Golodetz
  • Tang Ming
  • Tianyang Xu
  • Tianzhu Zhang
  • Tobias Fischer
  • Vincenzo Santopietro
  • Vitomir Štruc
  • Wang Wei
  • Wangmeng Zuo
  • Wei Feng
  • Wei Wu
  • Wei Zou
  • Weiming Hu
  • Wengang Zhou
  • Wenjun Zeng
  • Xiaofan Zhang
  • Xiaohe Wu
  • Xiao-Jun Wu
  • Xinmei Tian
  • Yan Li
  • Yan Lu
  • Yee Wei Law
  • Yi Wu
  • Yiannis Demiris
  • Yicai Yang
  • Yifan Jiao
  • Yuhong Li
  • Yunhua Zhang
  • Yuxuan Sun
  • Zheng Zhang
  • Zheng Zhu
  • Zhen-Hua Feng
  • Zhihui Wang
  • Zhiqun He
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11129)

Abstract

The Visual Object Tracking challenge VOT2018 is the sixth annual tracker benchmarking activity organized by the VOT initiative. Results of over eighty trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies for short-term tracking analysis, as well as a “real-time” experiment simulating a situation where a tracker processes images as if provided by a continuously running sensor. A long-term tracking sub-challenge has been introduced to the set of standard VOT sub-challenges. The new sub-challenge focuses on long-term tracking properties, namely coping with target disappearance and reappearance. A new dataset has been compiled, and a performance evaluation methodology that focuses on long-term tracking capabilities has been adopted. The VOT toolkit has been updated to support both the standard short-term and the new long-term tracking sub-challenges. The performance of the tested trackers typically far exceeds that of standard baselines. The source code for most of the trackers is publicly available from the VOT page. The dataset, the evaluation kit and the results are publicly available at the challenge website (http://votchallenge.net).

Acknowledgements

This work was supported in part by the following research programs and projects: Slovenian Research Agency research programs P2-0214 and P2-0094, and Slovenian Research Agency project J2-8175. Jiří Matas and Tomáš Vojíř were supported by the Czech Science Foundation project GACR P103/12/G084. Michael Felsberg and Gustav Häger were supported by WASP, VR (EMC2), SSF (SymbiCloud), and SNIC. Roman Pflugfelder and Gustavo Fernández were supported by the AIT Strategic Research Programme 2017 Visual Surveillance and Insight. The challenge was sponsored by the Faculty of Computer Science, University of Ljubljana, Slovenia.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Matej Kristan (1)
  • Aleš Leonardis (2)
  • Jiří Matas (3)
  • Michael Felsberg (4)
  • Roman Pflugfelder (5, 6)
  • Luka Čehovin Zajc (1)
  • Tomáš Vojíř (3)
  • Goutam Bhat (4)
  • Alan Lukežič (1)
  • Abdelrahman Eldesokey (4)
  • Gustavo Fernández (5)
  • Álvaro García-Martín (44)
  • Álvaro Iglesias-Arias (44)
  • A. Aydin Alatan (28)
  • Abel González-García (47)
  • Alfredo Petrosino (54)
  • Alireza Memarmoghadam (53)
  • Andrea Vedaldi (55)
  • Andrej Muhič (1)
  • Anfeng He (27)
  • Arnold Smeulders (48)
  • Asanka G. Perera (57)
  • Bo Li (7)
  • Boyu Chen (13)
  • Changick Kim (24)
  • Changsheng Xu (30)
  • Changzhen Xiong (9)
  • Cheng Tian (16)
  • Chong Luo (27)
  • Chong Sun (13)
  • Cong Hao (52)
  • Daijin Kim (34)
  • Deepak Mishra (19)
  • Deming Chen (52)
  • Dong Wang (13)
  • Dongyoon Wee (31)
  • Efstratios Gavves (48)
  • Erhan Gundogdu (14)
  • Erik Velasco-Salido (44)
  • Fahad Shahbaz Khan (4)
  • Fan Yang (42)
  • Fei Zhao (32, 50)
  • Feng Li (16)
  • Francesco Battistone (26)
  • George De Ath (51)
  • Gorthi R. K. S. Subrahmanyam (19)
  • Guilherme Bastos (45)
  • Haibin Ling (42)
  • Hamed Kiani Galoogahi (35)
  • Hankyeol Lee (24)
  • Haojie Li (40)
  • Haojie Zhao (13)
  • Heng Fan (42)
  • Honggang Zhang (10)
  • Horst Possegger (15)
  • Houqiang Li (56)
  • Huchuan Lu (13)
  • Hui Zhi (9)
  • Huiyun Li (39)
  • Hyemin Lee (34)
  • Hyung Jin Chang (2)
  • Isabela Drummond (45)
  • Jack Valmadre (55)
  • Jaime Spencer Martin (58)
  • Javaan Chahl (57)
  • Jin Young Choi (37)
  • Jing Li (12)
  • Jinqiao Wang (32, 50)
  • Jinqing Qi (13)
  • Jinyoung Sung (31)
  • Joakim Johnander (4)
  • Joao Henriques (55)
  • Jongwon Choi (37)
  • Joost van de Weijer (47)
  • Jorge Rodríguez Herranz (1, 41)
  • José M. Martínez (44)
  • Josef Kittler (58)
  • Junfei Zhuang (8, 10)
  • Junyu Gao (30)
  • Klemen Grm (1)
  • Lichao Zhang (47)
  • Lijun Wang (13)
  • Lingxiao Yang (17)
  • Litu Rout (19)
  • Liu Si (22)
  • Luca Bertinetto (55)
  • Lutao Chu (39, 50)
  • Manqiang Che (9)
  • Mario Edoardo Maresca (54)
  • Martin Danelljan (4)
  • Ming-Hsuan Yang (49)
  • Mohamed Abdelpakey (25)
  • Mohamed Shehata (25)
  • Myunggu Kang (31)
  • Namhoon Lee (55)
  • Ning Wang (56)
  • Ondrej Miksik (55)
  • P. Moallem (53)
  • Pablo Vicente-Moñivar (44)
  • Pedro Senna (46)
  • Peixia Li (13)
  • Philip Torr (55)
  • Priya Mariam Raju (19)
  • Qian Ruihe (22)
  • Qiang Wang (30)
  • Qin Zhou (38)
  • Qing Guo (43)
  • Rafael Martín-Nieto (44)
  • Rama Krishna Gorthi (19)
  • Ran Tao (48)
  • Richard Bowden (58)
  • Richard Everson (51)
  • Runling Wang (33)
  • Sangdoo Yun (37)
  • Seokeon Choi (24)
  • Sergio Vivas (44)
  • Shuai Bai (8, 10)
  • Shuangping Huang (40)
  • Sihang Wu (40)
  • Simon Hadfield (58)
  • Siwen Wang (13)
  • Stuart Golodetz (55)
  • Tang Ming (32, 50)
  • Tianyang Xu (23)
  • Tianzhu Zhang (30)
  • Tobias Fischer (18)
  • Vincenzo Santopietro (54)
  • Vitomir Štruc (1)
  • Wang Wei (11)
  • Wangmeng Zuo (16)
  • Wei Feng (43)
  • Wei Wu (36)
  • Wei Zou (21)
  • Weiming Hu (30)
  • Wengang Zhou (56)
  • Wenjun Zeng (27)
  • Xiaofan Zhang (52)
  • Xiaohe Wu (16)
  • Xiao-Jun Wu (23)
  • Xinmei Tian (56)
  • Yan Li (9)
  • Yan Lu (9)
  • Yee Wei Law (57)
  • Yi Wu (20, 29)
  • Yiannis Demiris (18)
  • Yicai Yang (40)
  • Yifan Jiao (30)
  • Yuhong Li (10, 52)
  • Yunhua Zhang (13)
  • Yuxuan Sun (13)
  • Zheng Zhang (59)
  • Zheng Zhu (21, 50)
  • Zhen-Hua Feng (58)
  • Zhihui Wang (13)
  • Zhiqun He (8, 10)
  1. University of Ljubljana, Ljubljana, Slovenia
  2. University of Birmingham, Birmingham, UK
  3. Czech Technical University, Prague, Czech Republic
  4. Linköping University, Linköping, Sweden
  5. Austrian Institute of Technology, Seibersdorf, Austria
  6. TU Wien, Vienna, Austria
  7. Beihang University, Beijing, China
  8. Beijing Faceall Co., Beijing, China
  9. Beijing Key Laboratory of Urban Intelligent Control, Beijing, China
  10. Beijing University of Posts and Telecommunications, Beijing, China
  11. China Huayin Ordnance Test Center, Huayin, China
  12. Civil Aviation University of China, Tianjin, China
  13. Dalian University of Technology, Dalian, China
  14. EPFL, Lausanne, Switzerland
  15. Graz University of Technology, Graz, Austria
  16. Harbin Institute of Technology, Harbin, China
  17. Hong Kong Polytechnic University, Kowloon, Hong Kong
  18. Imperial College London, London, UK
  19. Indian Institute of Space Science and Technology, Thiruvananthapuram, India
  20. Indiana University, Bloomington, USA
  21. Institute of Automation, Chinese Academy of Sciences, Beijing, China
  22. Institute of Information Engineering, Beijing, China
  23. Jiangnan University, Wuxi, China
  24. KAIST, Daejeon, South Korea
  25. Memorial University of Newfoundland, St. John’s, Canada
  26. Mer Mec S.p.A., Monopoli, Italy
  27. Microsoft Research Asia, Beijing, China
  28. Middle East Technical University, Ankara, Turkey
  29. Nanjing Audit University, Nanjing, China
  30. National Laboratory of Pattern Recognition, Beijing, China
  31. Naver Corporation, Seongnam, South Korea
  32. NLPR, Institute of Automation, Chinese Academy of Sciences, Beijing, China
  33. North China University of Technology, Beijing, China
  34. POSTECH, Pohang, South Korea
  35. Robotics Institute, Carnegie Mellon University, Pittsburgh, USA
  36. Sensetime, Beijing, China
  37. Seoul National University, Seoul, South Korea
  38. Shanghai Jiao Tong University, Shanghai, China
  39. Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
  40. South China University of Technology, Guangzhou, China
  41. Technical University of Madrid, Madrid, Spain
  42. Temple University, Philadelphia, USA
  43. Tianjin University, Tianjin, China
  44. Universidad Autónoma de Madrid, Madrid, Spain
  45. Universidade Federal de Itajubá, Itajubá, Brazil
  46. Universidade Federal do Mato Grosso do Sul, Campo Grande, Brazil
  47. Universitat Autónoma de Barcelona, Barcelona, Spain
  48. University of Amsterdam, Amsterdam, Netherlands
  49. University of California, Merced, USA
  50. University of Chinese Academy of Sciences, Beijing, China
  51. University of Exeter, Exeter, UK
  52. University of Illinois Urbana-Champaign, Urbana, USA
  53. University of Isfahan, Isfahan, Iran
  54. University of Naples Parthenope, Naples, Italy
  55. University of Oxford, Oxford, UK
  56. University of Science and Technology of China, Hefei, China
  57. University of South Australia, Adelaide, Australia
  58. University of Surrey, Guildford, UK
  59. Zhejiang University, Hangzhou, China
