Video Content Representation Based on Texture and Lighting
When dealing with as-yet unprocessed video, structuring and extracting features according to models that reflect the idiosyncrasies of a video data category (film, news, etc.) is essential for reliable content annotation, and thus for the use of the video. In this paper, we present methods for the automatic extraction of texture and lighting features from representative frames of video shots. These features are among the most important elements characterizing the development of plastic (physical) space in film video, and they are important in other video categories as well. Texture and lighting are two basic properties, or features, of video frames represented in the general film model, which is informed by the internal components and interrelationships known and used in the film application domain. Our method for extracting texture granularity builds on the approach of measuring granularity as the spatial rate of change of the image intensity, which we extend to color textures. Our method for extracting the lighting feature builds on the closed-solution schemes for illuminant estimation, which we make more general and more effective.
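The granularity measure described above — the spatial rate of change of intensity, extended to color — can be illustrated with a minimal sketch. The function name, the use of mean absolute finite differences as the "rate of change", and the choice to average over color channels are all illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def texture_granularity(frame: np.ndarray) -> float:
    """Illustrative granularity measure: mean absolute spatial change of
    intensity, averaged over color channels.

    frame: H x W (grayscale) or H x W x C (color) array.
    Higher values indicate a finer, busier texture.
    """
    img = np.asarray(frame, dtype=float)
    if img.ndim == 2:                       # treat grayscale as one channel
        img = img[:, :, np.newaxis]
    total = 0.0
    for c in range(img.shape[2]):
        ch = img[:, :, c]
        gx = np.abs(np.diff(ch, axis=1)).mean()  # horizontal rate of change
        gy = np.abs(np.diff(ch, axis=0)).mean()  # vertical rate of change
        total += (gx + gy) / 2.0
    return float(total / img.shape[2])       # average over channels

# A flat frame has zero granularity; a checkerboard is maximally busy.
flat = np.full((32, 32, 3), 128.0)
checker = np.indices((32, 32)).sum(axis=0) % 2 * 255.0
print(texture_granularity(flat))     # 0.0
print(texture_granularity(checker))  # 255.0
```

A real extractor would apply this per shot to a representative frame, possibly after smoothing to suppress noise; the paper's color extension may weight channels differently than the uniform average used here.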
Keywords: Video Data, Automatic Extraction, Video Shot, Texture Granularity, Color Texture
References
- M. J. Brooks and B. K. P. Horn. Shape and Source from Shading. A.I. Memo 820, M.I.T., 1985.
- M. Flickner. Query by Image and Video Content. IEEE Computer, 28(9):23–32, September 1995.
- S. Gibbs, C. Breiteneder, and D. Tsichritzis. Audio/Video Databases: An Object-Oriented Approach. In A. Elmagarmid and E. Neuhold, editors, Proceedings of 9th International Conference on Data Engineering (ICDE'93), pages 381–390, Vienna, Austria, April 1993. IEEE Computer Society Press.
- R. Hjelsvold and R. Midstraum. Modelling and Querying Video Data. In M. Maybury, editor, Proceedings of 20th Conference on Very Large Databases (VLDB'94), pages 686–695, Santiago, Chile, September 1994. AAAI Press and The MIT Press.
- J. Monaco. How to Read a Film: The Art, Technology, Language, History and Theory of Film and Media. Oxford University Press, 1977.
- A. Nagasaka and Y. Tanaka. Automatic Video Indexing and Full-Video Search for Object Appearances. In E. Knuth and L. Wegner, editors, Visual Database Systems, II, pages 113–127. Elsevier Science Publishers B.V., 1992.
- G. Paschos and I. Radev. Video Sequence Cut Detection in Different Color Spaces. In IASTED International Conference on Signal and Image Processing (SIP'99), Nassau, Bahamas, October 1999.
- A. P. Pentland. Finding the Illuminant Direction. Journal of Optical Society of America, pages 448–455, 1982.
- I. Radev, N. Pissinou, and K. Makki. Film Video Modeling. In Proceedings of IEEE Workshop on Knowledge and Data Engineering Exchange (KDEX'99), Chicago, Illinois, November 1999.
- L. Rowe, J. Boreczky, and C. Eads. Indexes for Access to Large Video Databases. In W. Niblack and R. Jain, editors, Proceedings of Conference on Storage and Retrieval for Image and Video Databases II (SPIE'94), volume 2185, pages 150–161, San Jose, California, February 1994. SPIE.