Temporal Saliency for Fast Motion Detection

  • Hamed Rezazadegan Tavakoli
  • Esa Rahtu
  • Janne Heikkilä
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7728)

Abstract

This paper presents a novel saliency detection method and applies it to motion detection. Detecting salient regions in videos or images reduces the computation required for complex tasks such as object recognition, and it helps preserve important information in tasks such as video compression. Recent advances have given rise to biologically motivated approaches to saliency detection. We estimate saliency by measuring the change in each pixel's intensity over a temporal interval, combined with a filtering step based on principal component analysis that suppresses noise. We applied the method to the Background Models Challenge (BMC) video data set. Experiments show that the proposed method is both suitable and accurate; in addition, it is fast to compute.
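The abstract describes the approach only at a high level. The sketch below is a minimal illustration of that general idea, not the authors' implementation: a low-rank PCA reconstruction of a short frame window suppresses noise, and per-pixel intensity change over the window serves as the temporal saliency score. The function name, window length, number of components, and threshold are all illustrative assumptions.

```python
# Minimal sketch of PCA-denoised temporal-difference saliency.
# NOT the authors' exact method; parameters and structure are assumptions.
import numpy as np

def temporal_saliency(frames, n_components=3):
    """frames: (T, H, W) grayscale stack covering one temporal interval."""
    T, H, W = frames.shape
    X = frames.reshape(T, -1).astype(np.float64)        # each row = one frame

    # PCA filtering: project the window onto its leading principal
    # components and reconstruct, which suppresses pixel-level noise.
    mean = X.mean(axis=0, keepdims=True)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    k = min(n_components, len(S))
    X_hat = U[:, :k] @ np.diag(S[:k]) @ Vt[:k] + mean    # low-rank reconstruction

    # Saliency: per-pixel intensity change across the temporal interval.
    change = np.abs(X_hat[-1] - X_hat[0]).reshape(H, W)
    return change / (change.max() + 1e-8)                # normalise to [0, 1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    video = rng.normal(128.0, 2.0, size=(10, 120, 160))  # static noisy background
    video[5:, 40:60, 60:90] += 60.0                       # a patch that appears mid-window
    sal = temporal_saliency(video)
    mask = sal > 0.5                                      # simple threshold -> motion mask
    print("salient pixels:", int(mask.sum()))
```

Thresholding the saliency map, as in the usage above, yields a binary motion mask comparable to the output of background-subtraction methods evaluated on BMC.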

Keywords

Salient Object · Salient Region · Saliency Detection · Evaluation Sequence · Subspace Learning

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Hamed Rezazadegan Tavakoli¹
  • Esa Rahtu¹
  • Janne Heikkilä¹
  1. Center for Machine Vision Research, University of Oulu, Oulu, Finland
