
A novel foreground region analysis using NCP-DBP texture pattern for robust visual tracking

Published in: Multimedia Tools and Applications

Abstract

Robust visual tracking is a key stage in computer vision applications such as robotics, unmanned control systems, and visual surveillance. Accurate estimation of motion states and of the target representation in a visual tracking system depends on the appearance of the target. The main factor affecting the learning of the target representation is the error accumulated due to pose changes, illumination changes, and an uneven background. A dynamic background and shadowing effects cause visual drift and destructive information, and misclassification of the target region induces false detection of moving objects. Existing K-means and Fuzzy C-means clustering algorithms segment the foreground and background and suppress the shadow region on the assumption of a non-changing background in the surveillance area. This paper proposes a novel background normalization technique with textural pattern analysis to suppress the shadow region. The Neighborhood Chain Prediction (NCP) algorithm clusters the uneven background, and the Differential Boundary Pattern (DBP) extracts the texture of the video frame to suppress the shadow pixels present in the frame. Estimating the lower-intensity pixels and predicting the area around them enhances the pixels for shadow removal. The shadow-free frame is split into several grids, and histograms of features are extracted from the grid-formatted frame. Finally, Machine Level Classification (MLC) finds the grid that matches the tracking region and provides binary labels to separate background from foreground. The proposed DBP-based visual tracking system is highly robust to sudden illumination changes and dynamic backgrounds owing to the texture pattern analysis. Comparison of the proposed NCP-DBP combination with existing segmentation techniques in terms of accuracy, precision, recall, F-measure, success rate, and error rate confirms its effectiveness in visual tracking applications.
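The abstract outlines a pipeline of background clustering, texture-based shadow suppression, grid-wise histogram extraction, and grid matching for foreground/background labeling. The sketch below illustrates that general flow in Python only as an assumption-laden illustration: since this page does not define NCP, DBP, or MLC, it substitutes plain K-means clustering, a simple 8-neighbour binary texture code, and nearest-neighbour histogram matching as stand-ins, and every function and parameter name here is hypothetical rather than the authors' method.

```python
# Illustrative sketch only: K-means, an LBP-style texture code, and histogram
# matching stand in for the paper's NCP, DBP, and MLC stages, which are not
# defined on this page.
import numpy as np
import cv2
from sklearn.cluster import KMeans

def texture_map(gray):
    """8-neighbour binary texture code (stand-in for a DBP-style descriptor)."""
    padded = np.pad(gray.astype(np.int16), 1, mode="edge")
    code = np.zeros_like(gray, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = padded[1 + dy:1 + dy + gray.shape[0], 1 + dx:1 + dx + gray.shape[1]]
        code |= ((nb >= gray).astype(np.uint8) << bit)
    return code

def cluster_background(gray, n_clusters=3):
    """Cluster pixel intensities of an uneven background (K-means stand-in for NCP)."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(gray.reshape(-1, 1).astype(np.float32))
    return labels.reshape(gray.shape)

def suppress_shadows(gray, texture):
    """Smooth pixels in the lowest-intensity cluster whose texture matches the
    local neighbourhood (a crude proxy for shadow suppression)."""
    clusters = cluster_background(gray)
    means = [gray[clusters == k].mean() for k in range(clusters.max() + 1)]
    low = clusters == int(np.argmin(means))          # lowest-intensity region
    shadow = low & (texture == cv2.medianBlur(texture, 5))
    cleaned = gray.copy()
    cleaned[shadow] = cv2.medianBlur(gray, 7)[shadow]
    return cleaned

def grid_histograms(img, grid=(8, 8), bins=32):
    """Split the frame into grid cells and return one intensity histogram per cell."""
    h, w = img.shape
    gh, gw = h // grid[0], w // grid[1]
    feats = []
    for r in range(grid[0]):
        for c in range(grid[1]):
            cell = img[r * gh:(r + 1) * gh, c * gw:(c + 1) * gw]
            hist, _ = np.histogram(cell, bins=bins, range=(0, 256), density=True)
            feats.append(hist)
    return np.array(feats)

def label_foreground(frame_gray, target_hist, grid=(8, 8)):
    """Binary grid labels: 1 for cells whose histogram is closest to the target."""
    texture = texture_map(frame_gray)
    cleaned = suppress_shadows(frame_gray, texture)
    feats = grid_histograms(cleaned, grid)
    dists = np.linalg.norm(feats - target_hist, axis=1)
    labels = (dists < np.percentile(dists, 10)).astype(np.uint8)  # closest ~10% of cells
    return labels.reshape(grid)
```

In practice, `target_hist` would come from a grid cell covering a previously labelled target region, and the 10th-percentile threshold is an arbitrary illustrative choice; the actual NCP clustering, DBP descriptor, and MLC classifier are described in the full paper.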



Author information

Correspondence to Mohanapriya D.


Cite this article

Mohanapriya D., Mahesh K. A novel foreground region analysis using NCP-DBP texture pattern for robust visual tracking. Multimed Tools Appl 76, 25731–25748 (2017). https://doi.org/10.1007/s11042-017-4409-3
