Semantic Feature Selection for Object Discovery in High-Resolution Remote Sensing Imagery

  • Dihua Guo
  • Hui Xiong
  • Vijay Atluri
  • Nabil Adam
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4426)


Given its importance, the problem of object discovery in High-Resolution Remote-Sensing (HRRS) imagery has received considerable attention from image-retrieval researchers. Despite this effort, the hidden semantics of images remain largely unexploited for image retrieval. To this end, in this paper we exploit a hyperclique pattern discovery method to find complex objects: groups of co-existing individual objects that together form a unique semantic concept. We treat the identified groups of co-existing objects as new feature sets and feed them into the learning model to improve image-retrieval performance. Experiments with real-world datasets show that, with the new semantic features as starting points, we can improve the performance of object discovery in terms of various external criteria.
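The core idea the abstract describes, mining hyperclique patterns of co-occurring objects, can be illustrated with a minimal sketch. Following the h-confidence definition from Xiong et al. (reference 14), an itemset is a hyperclique pattern when supp(P) divided by the largest single-item support within P meets a threshold. The object labels and image tiles below are hypothetical, and the brute-force enumeration is for exposition only, not the authors' implementation.

```python
from itertools import combinations

def support(itemset, transactions):
    """Fraction of transactions that contain every item in the itemset."""
    s = set(itemset)
    return sum(1 for t in transactions if s <= t) / len(transactions)

def hyperclique_patterns(transactions, h_min=0.6, max_size=3):
    """Brute-force hyperclique mining; adequate for small object vocabularies.

    An itemset P qualifies if h-confidence(P) = supp(P) / max_i supp({i})
    is at least h_min, which favors groups of strongly affiliated objects.
    """
    items = sorted(set().union(*transactions))
    patterns = []
    for k in range(2, max_size + 1):
        for cand in combinations(items, k):
            supp = support(cand, transactions)
            if supp == 0:
                continue
            h_conf = supp / max(support((i,), transactions) for i in cand)
            if h_conf >= h_min:
                patterns.append((cand, round(h_conf, 2)))
    return patterns

# Toy data: objects detected in four image tiles (hypothetical labels).
tiles = [{"runway", "terminal", "apron"},
         {"runway", "terminal"},
         {"house", "road"},
         {"runway", "apron"}]
print(hyperclique_patterns(tiles, h_min=0.6))
```

Each discovered pattern, e.g. a runway co-occurring with a terminal, would then be added as a composite-object feature for the retrieval model, which is the feature-selection step the paper proposes.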


Keywords: Image Retrieval · Semantic Feature · Individual Object · Object Discovery · Composite Object




References

  1. Barnard, K., et al.: Matching words and pictures. Journal of Machine Learning Research 3, 1107–1135 (2003)
  2. Duygulu, P., et al.: Object recognition as machine translation: learning a lexicon for a fixed image vocabulary. In: Heyden, A., et al. (eds.) ECCV 2002. LNCS, vol. 2353, pp. 97–112. Springer, Heidelberg (2002)
  3. Feng, S.L., Manmatha, R., Lavrenko, V.: Multiple Bernoulli relevance models for image and video annotation. In: CVPR, pp. 1002–1009 (2004)
  4. Guo, D., Atluri, V., Adam, N.: Texture-based remote-sensing image segmentation. In: ICME, pp. 1472–1475 (2005)
  5. eCognition user guide (2004), http://www.definiens
  6. Jeon, J., Lavrenko, V., Manmatha, R.: Automatic image annotation and retrieval using cross-media relevance models. In: SIGIR, pp. 254–261 (2003)
  7. Lavrenko, V., Choquette, M., Croft, W.: Cross-lingual relevance models. In: SIGIR, pp. 175–182 (2002)
  8. Lavrenko, V., Croft, W.: Relevance-based language models. In: SIGIR, pp. 120–127 (2001)
  9. Mori, Y., Takahashi, H., Oka, R.: Image-to-word transformation based on dividing and vector quantizing images with words. In: MISRM (1999)
  10. Sheikholeslami, G., Chang, W., Zhang, A.: SemQuery: Semantic clustering and querying on heterogeneous features for visual data. TKDE 14(5), 988–1002 (2002)
  11. Shekhar, S., Chawla, S.: Spatial Databases: A Tour, p. 300. Prentice-Hall, Englewood Cliffs (2003)
  12. Wang, L., et al.: Automatic image annotation and retrieval using weighted feature selection. In: IEEE-MSE, Kluwer, Dordrecht (2004)
  13. Xiong, H., Tan, P., Kumar, V.: Mining strong affinity association patterns in data sets with skewed support distribution. In: ICDM, pp. 387–394 (2003)
  14. Xiong, H., Tan, P., Kumar, V.: Hyperclique pattern discovery. Data Mining and Knowledge Discovery 13(2), 219–242 (2006)

Copyright information

© Springer Berlin Heidelberg 2007

Authors and Affiliations

  • Dihua Guo (1)
  • Hui Xiong (1)
  • Vijay Atluri (1)
  • Nabil Adam (1)

  1. MSIS Department, Rutgers University, USA
