A Video Object Extraction Algorithm Based on Depth Map for Multi-view Video

  • Conference paper
In: Future Wireless Networks and Information Systems

Part of the book series: Lecture Notes in Electrical Engineering (LNEE, volume 143)


Abstract

In practical applications, video sequences contain rich texture information, which makes extracting the semantic objects of interest difficult. This paper presents a video object extraction algorithm based on the depth map for multi-view video coding in three-dimensional video systems. First, gradient operators with a threshold are used to roughly segment the color image into flat and textured regions, from which object contours are extracted, while the Otsu algorithm is used to separate background from foreground in the color image and fill in the pixels of the semantic objects. The corresponding depth map is then processed to highlight the regions of interest to human vision. At the same time, inter-frame differences are taken into account to add moving objects to the foreground, and the region of interest is extracted with morphological operations. Finally, a block-level object is obtained by combining the operators outlined above and applying block-wise thresholding. Unlike existing algorithms, the proposed method does not rely on the popular clustering schemes but instead uses the Otsu algorithm, which avoids much of the computational complexity that clustering introduces. Experimental results show that the proposed algorithm not only extracts the semantic objects accurately but also reduces computational complexity. Whether the objects are static or moving, the proposed algorithm achieves efficient segmentation of good quality.
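
The abstract outlines the pipeline but gives no implementation details. The following is a minimal sketch (Python with OpenCV) of how the stated steps could be combined: gradient-based contour extraction, Otsu foreground filling, a depth-map mask, inter-frame differencing, morphological cleanup, and a block-level decision. All thresholds, the block size, the depth convention (larger value = closer), and the helper function `extract_object_mask` are illustrative assumptions, not the authors' code.

```python
import cv2
import numpy as np

def extract_object_mask(curr_bgr, prev_bgr, depth, block=16,
                        grad_thresh=40, depth_thresh=128, block_ratio=0.3):
    """Illustrative sketch of the pipeline described in the abstract."""
    gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)

    # 1. Gradient operator: separate textured (contour) pixels from flat regions.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    contour_mask = (cv2.magnitude(gx, gy) > grad_thresh).astype(np.uint8)

    # 2. Otsu thresholding to split background/foreground and fill object pixels.
    _, otsu_mask = cv2.threshold(gray, 0, 1, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # 3. Depth map: keep near pixels (assumed regions of visual interest).
    depth_mask = (depth > depth_thresh).astype(np.uint8)

    # 4. Inter-frame difference to add moving pixels to the foreground.
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)
    _, motion_mask = cv2.threshold(diff, 15, 1, cv2.THRESH_BINARY)

    # Combine the cues and clean up with morphological closing/opening.
    combined = np.clip(contour_mask + otsu_mask * depth_mask + motion_mask, 0, 1)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    combined = cv2.morphologyEx(combined, cv2.MORPH_CLOSE, kernel)
    combined = cv2.morphologyEx(combined, cv2.MORPH_OPEN, kernel)

    # 5. Block-level decision: mark a block as object if enough pixels fire.
    h, w = combined.shape
    out = np.zeros_like(combined)
    for y in range(0, h, block):
        for x in range(0, w, block):
            if combined[y:y + block, x:x + block].mean() > block_ratio:
                out[y:y + block, x:x + block] = 1
    return out * 255
```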




Author information

Corresponding author

Correspondence to Zhou Xiaoliang.


Copyright information

© 2012 Springer-Verlag GmbH Berlin Heidelberg

About this paper

Cite this paper

Xiaoliang, Z. et al. (2012). A Video Object Extraction Algorithm Based on Depth Map for Multi-view Video. In: Zhang, Y. (eds) Future Wireless Networks and Information Systems. Lecture Notes in Electrical Engineering, vol 143. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-27323-0_7


  • DOI: https://doi.org/10.1007/978-3-642-27323-0_7

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-27322-3

  • Online ISBN: 978-3-642-27323-0

  • eBook Packages: Engineering, Engineering (R0)
