Fast and sub-pixel precision target tracking algorithm for intelligent dual-resolution camera

  • Zhuang He
  • Qi Li
  • Huajun Feng
  • Zhihai Xu
Original Article


The intelligent dual-resolution camera simultaneously provides a large imaging field and high resolution for the regions of interest, which makes it valuable for visual tracking applications. Although many trackers show great robustness on recent benchmarks, few achieve both high precision and real-time speed, which limits their practical use. In this paper, we propose a fast, sub-pixel precision tracker. It uses the time-shift property of the Fourier transform to convert the spatial displacement of the target into the period of the response in the Fourier domain. An improved Hough transform is then introduced to measure this period, so that the displacement can be calculated with sub-pixel precision. Furthermore, a method for obtaining benchmarks with the intelligent dual-resolution camera is proposed to verify the high precision of our tracker. Extensive experiments comparing the proposed tracker with state-of-the-art sub-pixel precision trackers show that it outperforms the best of them by 15.4% in average median distance precision, while remaining feasible for real-time tracking at speeds above 80 fps. Moreover, the proposed tracker has been evaluated on the dual-resolution camera, and the results show that it makes the observation of targets in the high-resolution image more complete.
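The core idea in the abstract, that the Fourier shift theorem turns a spatial displacement into a periodic (linear-phase) structure in the Fourier domain whose slope encodes a sub-pixel shift, can be illustrated with a minimal 1-D sketch. This is not the authors' method (which measures the period with an improved Hough transform); it is a simplified, hypothetical illustration that instead fits the phase ramp of the cross-power spectrum by least squares:

```python
import numpy as np

def subpixel_shift_1d(a, b):
    """Estimate the (possibly fractional) shift of b relative to a.

    By the Fourier shift theorem, b(x) = a(x - d) implies
    B(k) = A(k) * exp(-2*pi*i*k*d/N), so the normalized
    cross-power spectrum A * conj(B) / |A * conj(B)| carries a
    pure phase ramp 2*pi*k*d/N whose slope encodes d.
    """
    N = len(a)
    A, B = np.fft.fft(a), np.fft.fft(b)
    R = A * np.conj(B)
    R /= np.abs(R) + 1e-12          # keep only the phase information
    phase = np.angle(R)
    # Use low frequencies only, where the phase has not wrapped past pi
    # and the signal spectrum is well above numerical noise.
    k = np.arange(1, N // 8)
    # Least-squares fit of phase = (2*pi*d/N) * k, a line through the origin.
    slope = np.sum(phase[k] * k) / np.sum(k * k)
    return slope * N / (2.0 * np.pi)

# A Gaussian bump and a copy shifted by a fractional amount of 3.4 pixels.
x = np.arange(256, dtype=float)
a = np.exp(-0.5 * ((x - 100.0) / 6.0) ** 2)
b = np.exp(-0.5 * ((x - 103.4) / 6.0) ** 2)
print(subpixel_shift_1d(a, b))      # close to 3.4
```

The least-squares slope fit here is only one way to read the period off the phase ramp; the paper's contribution is a more robust Hough-transform-based measurement of the same quantity.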


Visual tracking · Optical design of instruments · Dual-resolution camera · Image processing · Pattern recognition · Sub-pixel precision



This work was supported by the Joint Fund Project 6141A02022307 of the Ministry of Education of the People's Republic of China.



Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2019

Authors and Affiliations

  1. State Key Laboratory of Modern Optical Instruments, Zhejiang University, Hangzhou, China
