
Dynamic tracking re-adjustment: a method for automatic tracking recovery in complex visual environments

Published in Multimedia Tools and Applications

Abstract

Detection and analysis of events from video sequences is one of the most important research issues in the computer vision and pattern analysis community. Before applying methods and tools for analyzing actions, behavior, or events, however, we need robust and reliable tracking algorithms that can automatically monitor the movements of many objects in the scene regardless of the complexity of the background, the existence of occlusions, and illumination changes. Despite recent research efforts in the field of object tracking, the main limitation of most existing algorithms is that they are not equipped with automatic recovery strategies able to re-initialize tracking whenever its performance severely deteriorates. This paper addresses the problem by proposing an automatic tracking recovery tool that improves the performance of any tracking algorithm whenever its results are not acceptable. For the recovery, non-linear object modeling tools are used which probabilistically label image regions as object classes. The models are also time varying. The first property is implemented using concepts from functional analysis, which allow any arbitrary non-linear function (with some restrictions on its continuity) to be parametrized as a finite series of known functional components with unknown coefficients. The second property is addressed by an algorithm that optimally estimates the non-linear model at an upcoming time instance from the non-linear models already approximated at the current time. The architecture is enhanced by a decision mechanism that identifies the time instances at which tracking recovery should take place. Experimental results on a set of video sequences exhibiting complex visual phenomena (full and partial occlusions, illumination variations, complex background, etc.) demonstrate the efficiency of the proposed scheme in providing tracking under very difficult visual conditions. Additionally, criteria are proposed to objectively evaluate the tracking performance and compare it with other strategies.
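
To make the recovery architecture concrete, the sketch below illustrates the loop the abstract describes: a non-linear model, expanded over known functional components with unknown coefficients, probabilistically labels image regions as object/non-object; the coefficients are propagated to the next time instance; and a decision mechanism triggers re-initialization of the base tracker when the tracked region no longer matches the model. This is only an illustrative sketch, not the authors' implementation: the Gaussian radial-basis expansion, the simple linear extrapolation used in place of the paper's optimal coefficient estimator, the 0.5 decision threshold, and the tracker/feature-extraction/relocalization interfaces are all hypothetical assumptions.

```python
# Illustrative sketch of model-based tracking recovery (assumptions noted above).
import numpy as np

def basis(x, centers, width=1.0):
    """Known functional components: here, Gaussian radial basis functions."""
    d = np.linalg.norm(x[None, :] - centers, axis=1)
    return np.exp(-(d ** 2) / (2.0 * width ** 2))

def object_probability(x, w, centers):
    """Probabilistic labelling of a feature vector x as 'object' (0..1)."""
    s = w @ basis(x, centers)                  # finite series with coefficients w
    return 1.0 / (1.0 + np.exp(-s))            # squash the series to a probability

def predict_coefficients(history):
    """Estimate the coefficients at the next time instance from past models.
    A plain linear extrapolation stands in for the paper's optimal estimator."""
    if len(history) < 2:
        return history[-1]
    return history[-1] + (history[-1] - history[-2])

def tracking_deteriorated(region_features, w, centers, threshold=0.5):
    """Decision mechanism: average object probability over the tracked region."""
    p = np.mean([object_probability(x, w, centers) for x in region_features])
    return p < threshold

def track_with_recovery(frames, tracker, extract_features, relocalize, w0, centers):
    """Wrap any base tracker; recover whenever its output stops matching the model.
    (Re-fitting the coefficients from newly labelled data is omitted here.)"""
    history = [w0]
    for frame in frames:
        region = tracker.update(frame)                 # base tracking step
        feats = extract_features(frame, region)        # features of the tracked region
        w_next = predict_coefficients(history)         # time-varying model prediction
        if tracking_deteriorated(feats, w_next, centers):
            # Recovery: label candidate regions with the predicted model and
            # restart the tracker on the most object-like one.
            region = relocalize(frame, lambda x: object_probability(x, w_next, centers))
            tracker.reinit(frame, region)
        history.append(w_next)
        yield region
```

In this reading, the wrapper is agnostic to the underlying tracker (mean shift, particle filter, etc.); only the model-based confidence decides when control is handed back to the recovery step.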



Acknowledgement

This work is supported by the European Union funded project SCOVIS “Self Configurable Cognitive Video Supervision” under the Seventh Framework Programme (FP7/2007–2013), grant agreement no. 216465.

Author information

Corresponding author

Correspondence to Anastasios Doulamis.

About this article

Cite this article

Doulamis, A. Dynamic tracking re-adjustment: a method for automatic tracking recovery in complex visual environments. Multimed Tools Appl 50, 49–73 (2010). https://doi.org/10.1007/s11042-009-0368-7
