
A Method of Film Clips Retrieval Using Image Queries Based on User Interests

Cognitive Internet of Things: Frameworks, Tools and Applications (ISAIR 2018)

Part of the book series: Studies in Computational Intelligence (SCI, volume 810)


Abstract

The growth of the entertainment industry has driven an explosive increase in automatically produced film trailers. Manually finding desired clips in such a large number of films is time-consuming and tedious, which makes locating the moments that match a user's main or particular preferences an urgent problem. Moreover, because viewers perceive a film subjectively, no single fixed trailer satisfies every user's interests. This paper addresses these problems with a query-related film clip extraction framework that selects frames that are both semantically related to the query and visually representative of the entire film. The experimental results show that our query-related film clip retrieval method is particularly useful for film editing, e.g., presenting an abstraction of the entire film while focusing on the parts that match the user's queries.
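To make the core idea concrete, the sketch below shows one plausible way to score frames jointly for query relevance and whole-film representativeness and pick the top candidates. It is a minimal illustration only: the feature space (e.g., embeddings from a pretrained CNN), the linear trade-off weight alpha, and the top-k selection are assumptions for exposition, not the paper's actual formulation.

```python
# Illustrative sketch of query-related frame selection (not the paper's method).
# Assumes per-frame features and the query-image feature share one embedding space.
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def select_query_related_frames(frame_feats, query_feat, k=10, alpha=0.7):
    """Pick k frames that are both query-relevant and representative.

    frame_feats: (N, D) array of per-frame features (e.g., CNN embeddings).
    query_feat:  (D,) feature of the user's image query.
    alpha:       trade-off between query relevance and representativeness.
    """
    F = l2_normalize(np.asarray(frame_feats, dtype=np.float64))
    q = l2_normalize(np.asarray(query_feat, dtype=np.float64))

    # Semantic relevance: cosine similarity of each frame to the query.
    relevance = F @ q

    # Representativeness: similarity to the film's mean appearance.
    centroid = l2_normalize(F.mean(axis=0))
    representativeness = F @ centroid

    score = alpha * relevance + (1.0 - alpha) * representativeness
    return np.argsort(score)[::-1][:k]          # indices of the top-k frames

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = rng.normal(size=(500, 128))        # stand-in for frame features
    query = rng.normal(size=128)                # stand-in for the query feature
    print(select_query_related_frames(frames, query, k=5))
```

In practice one would also want the selected frames to be diverse rather than near-duplicates; the simple top-k step here could be replaced by a diversity-aware selection, but that is beyond this illustration.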


Author information

Corresponding author

Correspondence to Ling Zou.

Copyright information

© 2020 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Zou, L., Wang, H., Chen, P., Wei, B. (2020). A Method of Film Clips Retrieval Using Image Queries Based on User Interests. In: Lu, H. (eds) Cognitive Internet of Things: Frameworks, Tools and Applications. ISAIR 2018. Studies in Computational Intelligence, vol 810. Springer, Cham. https://doi.org/10.1007/978-3-030-04946-1_9
