A Fusion Approach to Grayscale-Thermal Tracking with Cross-Modal Sparse Representation

  • Lin Li
  • Chenglong Li
  • Zhengzheng Tu
  • Jin Tang
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 875)

Abstract

Grayscale-thermal tracking has received much attention recently due to the complementary benefits of the visible and thermal infrared modalities in overcoming the imaging limitations of an individual source. This paper investigates how to perform effective fusion of the grayscale and thermal information for robust object tracking. We propose a novel fusion approach based on cross-modal sparse representation in the Bayesian filtering framework. First, to exploit the interdependence of different modalities, we take both the intra- and inter-modality constraints into account in the sparse representation, i.e., cross-modal sparse representation. Moreover, we introduce modality weights into our model to achieve adaptive fusion. Second, unlike conventional methods, we employ the reconstruction residues and coefficients together to define the likelihood probability for each candidate sample generated by the motion model. Finally, the object is located by finding the candidate sample with the maximum likelihood probability. Experimental results on the public benchmark dataset suggest that the proposed approach performs favourably against the state-of-the-art grayscale-thermal trackers.
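The pipeline described above can be sketched roughly as follows: encode each candidate over per-modality template dictionaries via sparse coding, fuse the modality-weighted reconstruction residues into a likelihood, and select the candidate with the maximum likelihood. This is a simplified illustration, not the paper's method: it solves each modality's lasso independently (omitting the cross-modal constraint and the coefficient term in the likelihood), and all function names and parameters (`ista_lasso`, `track_step`, `sigma`) are assumptions for the sketch.

```python
import numpy as np

def ista_lasso(D, y, lam=0.1, n_iter=100):
    """Solve min_c 0.5*||y - D c||^2 + lam*||c||_1 by ISTA (proximal gradient)."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the smooth term's gradient
    c = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ c - y)
        z = c - grad / L
        c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-thresholding
    return c

def track_step(candidates, dictionaries, weights, sigma=0.1):
    """Score each candidate by a fused likelihood over modalities.

    candidates:   dict modality -> (n_candidates, d) feature matrix
    dictionaries: dict modality -> (d, n_templates) template dictionary
    weights:      dict modality -> scalar modality weight
    Returns the index of the candidate with maximum likelihood.
    """
    n = next(iter(candidates.values())).shape[0]
    scores = np.zeros(n)
    for i in range(n):
        energy = 0.0
        for m in dictionaries:
            y = candidates[m][i]
            c = ista_lasso(dictionaries[m], y)
            residue = np.sum((y - dictionaries[m] @ c) ** 2)
            energy += weights[m] * residue  # modality-weighted residue
        scores[i] = np.exp(-energy / sigma)  # likelihood of candidate i
    return int(np.argmax(scores))
```

In this sketch a candidate lying in the span of a modality's templates yields a small residue, hence a high likelihood; the weights let one modality dominate when the other is unreliable (e.g., thermal at night).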

Keywords

Multi-modal fusion · Laplacian matrix · Sparse representation · Bayesian filtering

Notes

Acknowledgement

This work was supported in part by the Natural Science Foundation of Anhui Higher Education Institution of China under Grant KJ2017A017, and in part by the Co-Innovation Center for Information Supply & Assurance Technology, Anhui University under Grant Y01002449.

Supplementary material

470895_1_En_49_MOESM1_ESM.pdf (60 kb)
Supplementary material 1 (pdf 60 KB)

Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  1. School of Computer Science and Technology, Anhui University, Hefei, China