A Novel Background Normalization Technique with Textural Pattern Analysis for Multiple Target Tracking in Video

  • D. Mohanapriya
  • K. Mahesh
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 750)

Abstract

Visual tracking plays a central role in many computer vision applications, such as video editing, surveillance, and human–computer interaction. Its goal is to detect and follow a targeted moving object in a video. The basic principle of video tracking is that frames are retrieved from the video and compared to detect the moving object present in the sequence. Many techniques have emerged to track objects effectively, yet they still fall short in accuracy and other processing-related parameters. A particular difficulty faced by researchers in this field is that shadows present in the video interfere with tracking the targeted object, so the shadow of the target must be eliminated. In this paper, a novel algorithm is proposed to eliminate shadow pixels, track the target regions in the video frames, and achieve better accuracy. BCP-GPP is a new method used to extract the background and retrieve the texture pattern of the moving object. The extracted target is then partitioned into regions using the novel Machine Learning Classification (MLC) algorithm, which matches each grid to the tracked region and assigns a binary label that separates the background from the foreground. The target is then tracked using a blob-based extraction technique. Finally, the performance of the video tracking system is analysed using several parameters, namely sensitivity, specificity, and accuracy.
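The paper's BCP-GPP background/pattern extraction and MLC classifier are not detailed in this abstract, but the last two stages it names, blob-based extraction of the tracked regions and evaluation by sensitivity, specificity, and accuracy, can be illustrated generically. The sketch below is a minimal Python example, assuming a binary foreground mask has already been produced by the earlier stages; the connected-components call stands in for the blob-extraction step, and the function names and `min_area` threshold are illustrative rather than the authors' implementation.

```python
import numpy as np
import cv2


def track_blobs(fg_mask, min_area=50):
    """Group foreground pixels into blobs and return their bounding boxes.

    fg_mask  : 2-D array, non-zero where the pipeline marked foreground.
    min_area : illustrative threshold for discarding small noise fragments.
    """
    num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(
        fg_mask.astype(np.uint8), connectivity=8)
    boxes = []
    for i in range(1, num_labels):          # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:                # drop tiny blobs (noise, shadow remnants)
            boxes.append((int(x), int(y), int(w), int(h)))
    return boxes


def evaluate(pred_mask, gt_mask):
    """Per-pixel sensitivity, specificity and accuracy against a ground-truth mask."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    tp = np.sum(pred & gt)                  # foreground pixels correctly detected
    tn = np.sum(~pred & ~gt)                # background pixels correctly rejected
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy
```

Here sensitivity is TP/(TP+FN), specificity is TN/(TN+FP), and accuracy is (TP+TN) over all pixels, which matches the standard definitions of the three reported parameters.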

Keywords

Video editing · Video tracking · Shadows · Accuracy · Object tracking

Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  1. Department of Computer Applications, Alagappa University, Karaikudi, India
