Automated Music Video Generation Using Multi-level Feature-based Segmentation
The expansion of the home video market has created demand for video-editing tools that allow ordinary people to assemble videos from short clips. However, professional skills are still necessary to create a music video, which requires a video stream to be synchronized with pre-composed music. Because the music and the video are produced separately, even a professional producer usually needs a number of trials to obtain a satisfactory synchronization, which most amateurs are unable to achieve.
Our aim is to automatically extract a sequence of clips from a video and assemble them to match a piece of music. Previous authors [8, 9, 16] have approached this problem by trying to synchronize passages of music with arbitrary frames in each video clip using predefined feature rules. However, each shot in a video is an artistic statement by the video-maker, and we want to retain the coherence of the video-maker's intentions as far as possible.
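To make the idea of feature-based matching concrete, the following is a minimal sketch, not the authors' actual algorithm: given per-segment feature vectors (e.g. brightness and motion magnitude, assumed to be pre-extracted), each music segment is greedily paired with the closest unused video shot by Euclidean distance. The feature choices and greedy strategy here are illustrative assumptions only.

```python
def match_segments(video_feats, music_feats):
    """Greedily assign each music segment the unused video shot whose
    feature vector is closest in Euclidean distance.

    Illustrative sketch only; real systems weight features and also
    align segment durations with musical boundaries.
    """
    used = set()
    assignment = []
    for m in music_feats:
        best, best_d = None, float("inf")
        for i, v in enumerate(video_feats):
            if i in used:
                continue
            # plain Euclidean distance between feature vectors
            d = sum((a - b) ** 2 for a, b in zip(m, v)) ** 0.5
            if d < best_d:
                best, best_d = i, d
        used.add(best)
        assignment.append(best)
    return assignment

# Toy example: (brightness, motion) features normalized to [0, 1].
video = [(0.9, 0.1), (0.2, 0.8), (0.5, 0.5)]
music = [(0.1, 0.9), (0.8, 0.2)]
print(match_segments(video, music))  # → [1, 0]
```

A rule-based matcher of this kind treats each shot as an indivisible unit, which is consistent with the goal of preserving shot-level coherence rather than cutting at arbitrary frames.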
Keywords: Video frame · Video segment · Music video · Video segmentation · Brightness feature
This research was accomplished as a result of the promotion project for the culture contents technology research center supported by the Korea Culture & Content Agency (KOCCA).
- 1. Avid Technology Inc (2007) User guide for Pinnacle Studio 11. Avid Technology Inc, Tewksbury
- 3. Foote J, Cooper M, Girgensohn A (2002) Creating music videos using automatic media analysis. In: Proceedings of ACM Multimedia. ACM, New York, pp 553–560
- 4. Gose E, Johnsonbaugh R, Jost S (1996) Pattern recognition and image analysis. Prentice Hall, Englewood Cliffs
- 6. Helmholtz HL (1954) On the sensations of tone as a physiological basis for the theory of music. Dover (translation of original text, 1877)
- 7. Hu M (1963) Visual pattern recognition by moment invariants. IRE Trans Inf Theory 8(2):179–187
- 8. Hua XS, Lu L, Zhang HJ (2003) AVE: automated home video editing. In: Proceedings of ACM Multimedia. ACM, New York, pp 490–497
- 9. Hua XS, Lu L, Zhang HJ (2004) Automatic music video generation based on temporal pattern analysis. In: Proceedings of the 12th ACM International Conference on Multimedia. ACM, New York, pp 472–475
- 11. Jehan T, Lew M, Vaucelle C (2003) Cati Dance: self-edited, self-synchronized music video. In: SIGGRAPH Conference Abstracts and Applications. SIGGRAPH, Sydney, pp 27–31
- 12. Lan DJ, Ma YF, Zhang HJ (2003) A novel motion-based representation for video mining. In: Proceedings of the IEEE International Conference on Multimedia and Expo. IEEE, Piscataway, pp 6–9
- 13. Lee HC, Lee IK (2005) Automatic synchronization of background music and motion in computer animation. In: Proceedings of Eurographics 2005, Dublin, 29 August–2 September 2005, pp 353–362
- 14. Lucas B, Kanade T (1981) An iterative image registration technique with an application to stereo vision. In: Proceedings of the 7th International Joint Conference on Artificial Intelligence (IJCAI), Vancouver, August 1981, pp 674–679
- 17. Tekalp AM (1995) Digital video processing. Prentice Hall, Englewood Cliffs