Effectiveness of Video Ontology in Query by Example Approach

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 6890)

Abstract

In this paper, we develop a video retrieval method based on the Query-By-Example (QBE) approach, where a query is represented by providing example shots. Shots relevant to the query are then retrieved by constructing a retrieval model from the example shots. However, one drawback of QBE is that a user can provide only a small number of example shots, while each shot is generally represented by a high-dimensional feature. In such a case, the retrieval model tends to overfit to feature dimensions that are specific to the example shots but ineffective for retrieving relevant shots. As a result, many clearly irrelevant shots are retrieved. To overcome this, we construct a video ontology as a knowledge base for QBE-based video retrieval. Specifically, the video ontology is used to select concepts related to a query. Irrelevant shots are then filtered out by referring to recognition results for objects corresponding to the selected concepts. Finally, QBE-based retrieval is performed on the remaining shots to obtain the final result. The effectiveness of our video ontology is tested on TRECVID 2009 video data.
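To make the two-stage pipeline described in the abstract concrete, the sketch below illustrates ontology-based filtering followed by QBE ranking. It is a minimal illustration, not the authors' implementation: the concept names, the detection-score threshold, and the nearest-neighbour retrieval model are assumptions introduced for the example; the paper constructs its retrieval model from the example shots themselves and selects concepts via its video ontology.

```python
# Minimal sketch (assumed, not the paper's code) of ontology-guided filtering
# followed by Query-By-Example (QBE) retrieval.
import numpy as np


def filter_by_ontology(shots, query_concepts, threshold=0.2):
    """Keep only shots whose detection score for at least one
    query-related concept exceeds the threshold (threshold is illustrative)."""
    kept = []
    for shot in shots:
        scores = shot["concept_scores"]  # dict: concept name -> detector score
        if any(scores.get(c, 0.0) >= threshold for c in query_concepts):
            kept.append(shot)
    return kept


def qbe_retrieve(candidate_shots, example_features, top_k=10):
    """Rank candidate shots by their minimum distance to any example shot
    (a simple stand-in for the retrieval model built from example shots)."""
    examples = np.asarray(example_features)  # (n_examples, dim)
    ranked = sorted(
        candidate_shots,
        key=lambda s: np.min(
            np.linalg.norm(examples - np.asarray(s["feature"]), axis=1)
        ),
    )
    return ranked[:top_k]


if __name__ == "__main__":
    # Toy data: each shot has a feature vector and per-concept detector scores.
    # The concepts {"person", "road"} stand in for those selected from the ontology.
    rng = np.random.default_rng(0)
    shots = [
        {
            "id": i,
            "feature": rng.normal(size=8),
            "concept_scores": {"person": rng.random(), "road": rng.random()},
        }
        for i in range(100)
    ]
    example_features = rng.normal(size=(3, 8))  # features of the example shots
    candidates = filter_by_ontology(shots, {"person", "road"})
    results = qbe_retrieve(candidates, example_features)
    print([s["id"] for s in results])
```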





Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Shirahama, K., Uehara, K. (2011). Effectiveness of Video Ontology in Query by Example Approach. In: Zhong, N., Callaghan, V., Ghorbani, A.A., Hu, B. (eds) Active Media Technology. AMT 2011. Lecture Notes in Computer Science, vol 6890. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-23620-4_9

  • DOI: https://doi.org/10.1007/978-3-642-23620-4_9

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-23619-8

  • Online ISBN: 978-3-642-23620-4

  • eBook Packages: Computer Science, Computer Science (R0)
