
Extraction of Key Frames Using Rough Set Theory for Video Retrieval

  • G. S. Naveen Kumar
  • V. S. K. Reddy
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1118)

Abstract

A key frame is a representative frame that captures the salient content of an entire shot. Key frames are used to analyze, classify, index, and retrieve video. Existing algorithms produce suitable representative frames, but they often also generate redundant or irrelevant ones, and some fail to select exact key frames for an entire shot. To overcome this problem, we propose an efficient scheme based on DC coefficients and rough set theory that achieves better results than existing methods. The proposed algorithm extracts exact key frames because rough set theory removes the ambiguity in key-frame selection. The algorithm operates directly on compressed MPEG video, so full decoding is not necessary. Experimental results demonstrate the efficacy of the proposed algorithm.
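The abstract's idea can be sketched as follows. This is an illustrative reconstruction, not the authors' exact algorithm: it assumes each frame is summarized by a histogram of its DC coefficients (one per 8×8 DCT block, available without full MPEG decoding), and it uses a rough-set-style two-threshold rule in which frames whose distance from the last key frame exceeds an upper threshold form the positive region (certainly key frames), frames below a lower threshold are certainly redundant, and frames in between fall in the boundary region. The function names and threshold values are assumptions.

```python
import numpy as np

def dc_histogram(dc_coeffs, bins=16):
    """Normalized histogram of a frame's DC coefficients
    (hypothetical feature; one DC value per 8x8 block)."""
    hist, _ = np.histogram(dc_coeffs, bins=bins, range=(0, 255))
    return hist / hist.sum()

def rough_key_frames(dc_frames, t_low=0.1, t_high=0.3):
    """Two-threshold (rough-set style) key-frame selection sketch.

    Frames whose L1 histogram distance from the last key frame
    exceeds t_high fall in the positive region (certain key frames);
    frames below t_low are certainly redundant; the rest form the
    boundary region, returned separately for further analysis.
    """
    key_idx = [0]            # first frame of the shot is always kept
    boundary = []
    ref = dc_histogram(dc_frames[0])
    for i, frame in enumerate(dc_frames[1:], start=1):
        h = dc_histogram(frame)
        d = 0.5 * np.abs(h - ref).sum()   # L1 distance, in [0, 1]
        if d >= t_high:                   # positive region: key frame
            key_idx.append(i)
            ref = h
        elif d > t_low:                   # boundary region: uncertain
            boundary.append(i)
    return key_idx, boundary
```

For example, a shot whose frames change abruptly once should yield exactly two key frames, the first frame and the frame at the change, with an empty boundary region.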

Keywords

DC coefficients · Rough set theory · Representative frame · Video retrieval


Copyright information

© Springer Nature Singapore Pte Ltd. 2020

Authors and Affiliations

  1. Faculty of Engineering, Lincoln University College, Petaling Jaya, Malaysia
