A robust object tracking framework based on a reliable point assignment algorithm

  • Rong-feng Zhang
  • Ting Deng
  • Gui-hong Wang
  • Jing-lun Shi
  • Quan-sheng Guan
Article

Abstract

Visual tracking is widely used in many vision applications and has been one of the most active research topics in computer vision in recent years. However, visual tracking still faces challenges such as illumination change, object occlusion, and appearance deformation. To overcome these difficulties, a reliable point assignment (RPA) algorithm based on the wavelet transform is proposed. Reliable points are obtained by searching for locations with locally maximal wavelet coefficients. Since locally maximal wavelet coefficients indicate strong variation in the image, the reliable points are robust against image noise, illumination change, and appearance deformation. Moreover, a Kalman filter is applied in the detection step to speed up detection and reduce false detections. Finally, the proposed RPA and the Kalman filter are integrated into the tracking-learning-detection (TLD) framework, which not only improves tracking precision but also reduces false detections. Experimental results show that the new framework outperforms TLD and kernelized correlation filters in terms of precision, F-measure, and average overlap percentage.
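The reliable-point idea in the abstract can be pictured with a short sketch. The Python snippet below is only an illustration of selecting points at locally maximal wavelet detail coefficients; it is not the authors' implementation, and the wavelet choice, window size, and point budget are assumptions.

```python
# Minimal sketch (not the authors' code): pick candidate "reliable points"
# at locally maximal wavelet detail coefficients, as described in the
# abstract. Wavelet, window size, and point budget are assumptions.
import numpy as np
import pywt
from scipy.ndimage import maximum_filter

def reliable_points(gray, wavelet="haar", window=5, keep=100):
    """Return (row, col) image coordinates of strong local maxima of the
    single-level wavelet detail energy."""
    # One-level 2D DWT: approximation cA plus detail bands cH, cV, cD.
    cA, (cH, cV, cD) = pywt.dwt2(gray.astype(np.float32), wavelet)
    # Combine detail bands; large values mark strong local variation
    # (edges/corners), which tends to survive noise and illumination change.
    energy = np.abs(cH) + np.abs(cV) + np.abs(cD)
    # A coefficient is a local maximum if it equals the maximum of its window.
    is_max = energy == maximum_filter(energy, size=window)
    rows, cols = np.nonzero(is_max)
    # Keep only the strongest responses.
    order = np.argsort(energy[rows, cols])[::-1][:keep]
    # The DWT halves resolution, so map coordinates back to the input image.
    return np.stack([rows[order] * 2, cols[order] * 2], axis=1)
```

In the full framework, such points would be tracked between frames, and the Kalman filter's predicted object position would restrict the detector's search region, which corresponds to the speed-up and reduction of false detections described in the abstract.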

Key words

Locally maximal wavelet coefficients; Reliable point assignment; Object tracking; Tracking-learning-detection (TLD); Kalman filter

CLC number

TP391.41 

Notes

Acknowledgments

The authors thank Prof. Sheng-ming JIANG for his valuable advice.

References

  1. Bay, H., Ess, A., Tuytelaars, T., et al., 2008. Speeded-up robust features (SURF). Comput. Vis. Image Understand., 110(3): 346–359. http://dx.doi.org/10.1016/j.cviu.2007.09.014
  2. Brox, T., Bruhn, A., Papenberg, N., et al., 2004. High accuracy optical flow estimation based on a theory for warping. European Conf. on Computer Vision, p.25–36. http://dx.doi.org/10.1007/978-3-540-24673-2_3
  3. Cheng, C.W., Ou, W.L., Fan, C.P., 2016. Fast ellipse fitting based pupil tracking design for human-computer interaction applications. IEEE Int. Conf. on Consumer Electronics, p.445–446. http://dx.doi.org/10.1109/ICCE.2016.7430685
  4. Dalal, N., Triggs, B., 2005. Histograms of oriented gradients for human detection. IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, p.886–893. http://dx.doi.org/10.1109/CVPR.2005.177
  5. Elhamod, M., Levine, M.D., 2013. Automated real-time detection of potentially suspicious behavior in public transport areas. IEEE Trans. Intell. Transp. Syst., 14(2): 688–699. http://dx.doi.org/10.1109/TITS.2012.2228640
  6. Elmenreich, W., Koplin, M.A., 2011. Time-triggered object tracking subsystem for advanced driver assistance systems. Elektrotechn. Inform., 128(6): 203–208. http://dx.doi.org/10.1007/s00502-011-0004-x
  7. Gonzalez, R.C., Woods, R.E., 2002. Digital Image Processing (2nd Ed.). Prentice Hall, Inc., New Jersey.
  8. Harris, C., Stephens, M., 1988. A combined corner and edge detector. Proc. Alvey Vision Conf., p.147–151. http://dx.doi.org/10.5244/C.2.23
  9. Henriques, J.F., Caseiro, R., Martins, P., et al., 2015. High-speed tracking with kernelized correlation filters. IEEE Trans. Patt. Anal. Mach. Intell., 37(3): 583–596. http://dx.doi.org/10.1109/TPAMI.2014.2345390
  10. Jeong, J.M., Yoon, T.S., Park, J.B., 2014. Kalman filter based multiple objects detection-tracking algorithm robust to occlusion. Proc. SICE Annual Conf., p.941–946. http://dx.doi.org/10.1109/SICE.2014.6935235
  11. Jia, C.X., Wang, Z.L., Wu, X., et al., 2015. A tracking-learning-detection (TLD) method with local binary pattern improved. IEEE Int. Conf. on Robotics and Biomimetics, p.1625–1630. http://dx.doi.org/10.1109/ROBIO.2015.7419004
  12. Jung, Y., Yoon, Y., 2015. Behavior tracking model in dynamic situation using the risk ratio EM. Int. Conf. on Information Networking, p.444–448. http://dx.doi.org/10.1109/ICOIN.2015.7057942
  13. Kalal, Z., Mikolajczyk, K., Matas, J., 2010a. Forward-backward error: automatic detection of tracking failures. 20th Int. Conf. on Pattern Recognition, p.23–26. http://dx.doi.org/10.1109/ICPR.2010.675
  14. Kalal, Z., Matas, J., Mikolajczyk, K., 2010b. P-N learning: bootstrapping binary classifiers by structural constraints. IEEE Conf. on Computer Vision and Pattern Recognition, p.49–56. http://dx.doi.org/10.1109/CVPR.2010.5540231
  15. Kalal, Z., Mikolajczyk, K., Matas, J., 2012. Tracking-learning-detection. IEEE Trans. Patt. Anal. Mach. Intell., 34(7): 1409–1422. http://dx.doi.org/10.1109/TPAMI.2011.239
  16. Kalman, R.E., 1960. A new approach to linear filtering and prediction problems. J. Basic Eng., 82(1): 35–45. http://dx.doi.org/10.1115/1.3662552
  17. Kaur, H., Sahambi, J.S., 2015. Vehicle tracking using fractional order Kalman filter for non-linear system. Int. Conf. on Computing, Communication and Automation, p.474–479. http://dx.doi.org/10.1109/CCAA.2015.7148423
  18. Kong, H., Akakin, H.C., Sarma, S.E., 2013. A generalized Laplacian of Gaussian filter for blob detection and its applications. IEEE Trans. Cybern., 43(6): 1719–1733. http://dx.doi.org/10.1109/TSMCB.2012.2228639
  19. Li, Y., Zhu, J.K., Hoi, S.C.H., 2015. Reliable patch trackers: robust visual tracking by exploiting reliable patches. IEEE Conf. on Computer Vision and Pattern Recognition, p.353–361. http://dx.doi.org/10.1109/CVPR.2015.7298632
  20. Liu, S., Zhang, T.Z., Cao, X.C., et al., 2016. Structural correlation filter for robust visual tracking. IEEE Conf. on Computer Vision and Pattern Recognition, p.4312–4320. http://dx.doi.org/10.1109/CVPR.2016.467
  21. Liu, T., Wang, G., Yang, Q.X., 2015. Real-time part-based visual tracking via adaptive correlation filters. IEEE Conf. on Computer Vision and Pattern Recognition, p.4902–4912. http://dx.doi.org/10.1109/CVPR.2015.7299124
  22. Lowe, D.G., 2004. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis., 60(2): 91–110. http://dx.doi.org/10.1023/B:VISI.0000029664.99615.94
  23. Ning, G.H., Zhang, Z., Huang, C., et al., 2016. Spatially supervised recurrent convolutional neural networks for visual object tracking. arXiv:1607.05781v1.
  24. Prakash, U.M., Thamaraiselvi, V.G., 2014. Detecting and tracking of multiple moving objects for intelligent video surveillance systems. 2nd Int. Conf. on Current Trends in Engineering and Technology, p.253–257. http://dx.doi.org/10.1109/ICCTET.2014.6966297
  25. Redmon, J., Divvala, S., Girshick, R., et al., 2016. You only look once: unified, real-time object detection. IEEE Conf. on Computer Vision and Pattern Recognition, p.779–788. http://dx.doi.org/10.1109/CVPR.2016.91
  26. Sun, X., Yao, H.X., Zhang, S.P., 2010. A refined particle filter method for contour tracking. SPIE, 7744:77441M. http://dx.doi.org/10.1117/12.863450
  27. Tarkov, M.S., Dubynin, S.V., 2013. Real-time object tracking by CUDA-accelerated neural network. J. Comput. Sci. Appl., 1(1): 1–4. http://dx.doi.org/10.12691/jcsa-1-1-1
  28. Viola, P., Jones, M., 2001. Rapid object detection using a boosted cascade of simple features. IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, p.511–518. http://dx.doi.org/10.1109/CVPR.2001.990517
  29. Xu, F., Gao, M., 2010. Human detection and tracking based on HOG and particle filter. 3rd Int. Congress on Image and Signal Processing, p.1503–1507. http://dx.doi.org/10.1109/CISP.2010.5646273
  30. Yu, H.M., Zeng, X., 2015. Visual tracking combined with ranking vector SVM. J. Zhejiang Univ. (Eng. Sci.), 49(6): 1015–1021 (in Chinese). http://dx.doi.org/10.3785/j.issn.1008-973X.2015.06.003
  31. Yu, W.S., Tian, X.H., Hou, Z.Q., et al., 2015. Multi-scale mean shift tracking. IET Comput. Vis., 9(1): 110–123. http://dx.doi.org/10.1049/iet-cvi.2014.0077
  32. Zhang, R.F., Xiao, H.H., Deng, T., et al., 2016. A robust point detection algorithm based on wavelet transform for visual tracking. Int. Congress on Image and Signal Processing, Biomedical Engineering and Informatics, p.1–5. http://dx.doi.org/10.1109/CISP-BMEI.2016.7852672

Copyright information

© Zhejiang University and Springer-Verlag GmbH Germany, part of Springer Nature 2017

Authors and Affiliations

  • Rong-feng Zhang (1, 2)
  • Ting Deng (3)
  • Gui-hong Wang (1)
  • Jing-lun Shi (1)
  • Quan-sheng Guan (1)

  1. School of Electronic and Information Engineering, South China University of Technology, Guangzhou, China
  2. School of Electronic and Information Engineering, Guangzhou College of South China University of Technology, Guangzhou, China
  3. Information Network Engineering and Research Center, South China University of Technology, Guangzhou, China
