Multimedia Tools and Applications, Volume 56, Issue 2, pp 351–364

Capturing contextual relationship for effective media search

  • Guang-Ho Cha


One of the central problems in media search is the semantic gap between the low-level features computed automatically from media data and the human interpretation of them. This gap arises because the notion of similarity is usually based on high-level abstraction, while the low-level features sometimes fail to reflect human perception. In this paper, we assume that the semantics of a media object is determined by its contextual relationships within a dataset, and we introduce a method to capture this contextual information from a large media (especially image) dataset for effective search. Similarity search in an image database based on this contextual information shows encouraging experimental results.
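The paper's own algorithm is not reproduced on this page. As a rough, hypothetical sketch of the general idea only (not the authors' method), contextual relationships in a dataset can be modelled by diffusing a query's affinity over a similarity graph built from low-level features, so that items connected to the query through chains of similar neighbours rank higher than raw pairwise feature distance alone would suggest. All function names and parameters below are illustrative assumptions.

```python
import math

def gaussian_similarity(x, y, sigma=1.0):
    """Pairwise similarity from low-level feature vectors (RBF kernel)."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2 * sigma ** 2))

def contextual_ranking(features, query_idx, alpha=0.5, iters=50):
    """Diffuse query affinity over the dataset's similarity graph.

    Items related to the query through intermediate neighbours pick up
    score even when their direct feature similarity to the query is low;
    this is one simple way to let dataset context shape the ranking.
    """
    n = len(features)
    # Pairwise similarity matrix with a zero diagonal.
    S = [[0.0 if i == j else gaussian_similarity(features[i], features[j])
          for j in range(n)] for i in range(n)]
    # Symmetric normalisation: W = D^{-1/2} S D^{-1/2}.
    deg = [sum(row) or 1.0 for row in S]
    W = [[S[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
         for i in range(n)]
    # Seed vector: affinity 1 at the query, 0 elsewhere.
    y = [1.0 if i == query_idx else 0.0 for i in range(n)]
    f = y[:]
    for _ in range(iters):
        f = [alpha * sum(W[i][j] * f[j] for j in range(n)) + (1 - alpha) * y[i]
             for i in range(n)]
    return f

# Tiny example: two clusters in a 2-D feature space, query in the first.
feats = [(0.0, 0.0), (0.2, 0.1), (0.1, 0.3), (5.0, 5.0), (5.2, 4.9)]
scores = contextual_ranking(feats, query_idx=0)
```

With these toy features, items in the query's own cluster end up with higher scores than items in the distant cluster, because affinity propagates only along strong similarity edges.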


Keywords: Contextual relationship, Media search, Semantic gap, Similarity search



Copyright information

© Springer Science+Business Media, LLC 2010

Authors and Affiliations

1. Department of Computer Engineering, Seoul National University of Science and Technology, Seoul, South Korea