
ViVid: A Video Feature Visualization Engine

  • Jianyu Fan
  • Philippe Pasquier
  • Luciane Maria Fadel
  • Jim Bizzocchi
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10290)

Abstract

Video editors face the challenge of montage editing when dealing with a massive number of video shots. The central problem is selecting which features to use for building repetition patterns in montage editing; testing various features for repetitions and watching videos one by one is time-consuming. A visualization tool for video features could assist montage editing, but no such tool is currently available. We present the design of ViVid, an interactive system for visualizing the video features of particular target videos. ViVid is a generic tool for computer-assisted montage and for the design of generative video art, which can take advantage of video feature information when rendering a piece. The system computes and visualizes color, motion, and texture information. Instead of visualizing the original feature data frame by frame, we rearranged the data and used both statistics of the video feature data and frame-level data to represent the video. The system uses dashboards to visualize multi-dimensional data in multiple views. We used the Seasons project as a case study for testing the tool. Our feasibility study shows that users are satisfied with the visualization tool.
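
As a rough illustration of the pipeline the abstract describes (per-frame features aggregated into statistics for dashboard views), the sketch below extracts a color and a motion feature per frame and summarizes them. This is not the authors' implementation; the specific feature choices (an HSV hue histogram and Farneback optical-flow magnitude via OpenCV) are assumptions made for illustration only.

```python
# Minimal sketch of per-frame feature extraction plus summary statistics.
# Assumes opencv-python and numpy are installed and the video is readable.
import cv2
import numpy as np

def extract_features(video_path):
    cap = cv2.VideoCapture(video_path)
    prev_gray = None
    hue_hists, motion_mags = [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Color feature: normalized 32-bin hue histogram of the frame.
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0], None, [32], [0, 180]).flatten()
        hue_hists.append(hist / (hist.sum() + 1e-8))
        # Motion feature: mean optical-flow magnitude between consecutive frames.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None:
            flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            motion_mags.append(np.linalg.norm(flow, axis=2).mean())
        prev_gray = gray
    cap.release()
    hue_hists = np.asarray(hue_hists)
    motion = np.asarray(motion_mags)
    # Keep both frame-level data and summary statistics, as the abstract suggests.
    return {
        "hue_hist_per_frame": hue_hists,
        "hue_hist_mean": hue_hists.mean(axis=0) if hue_hists.size else None,
        "motion_per_frame": motion,
        "motion_mean": float(motion.mean()) if motion.size else 0.0,
        "motion_std": float(motion.std()) if motion.size else 0.0,
    }
```

The per-frame arrays support detailed timeline views, while the summary statistics support compact overviews of many shots at once; a dashboard can then juxtapose both kinds of data in multiple linked views.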

Keywords

Video features · Data visualization

Notes

Acknowledgement

We would like to acknowledge the Social Sciences and Humanities Research Council of Canada and the Ministry of Education CAPES Brazil for their ongoing financial support. We would also like to thank the reviewers, whose thoughtful comments have assisted with this publication.

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Jianyu Fan (1)
  • Philippe Pasquier (1)
  • Luciane Maria Fadel (2)
  • Jim Bizzocchi (1)
  1. Simon Fraser University, Vancouver, Canada
  2. Federal University of Santa Catarina, Florianópolis, Brazil
