
Video Search via Ranking Network with Very Few Query Exemplars

  • De Cheng
  • Lu Jiang
  • Yihong Gong
  • Nanning Zheng
  • Alexander G. Hauptmann
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10133)

Abstract

This paper addresses the challenge of video search with only a handful of query exemplars by proposing a triplet ranking network-based method. In a typical video search scenario, a user begins the query process by using a metadata-based text-to-video search module to find an initial set of videos of interest in the video repository. Because bridging the semantic gap between text and video is very challenging, usually only a handful of relevant videos appear in the initial results. The user can then use a video-to-video search module to train a new classifier and retrieve more relevant videos. However, we found that statistically fewer than 5 of the initially retrieved videos are relevant, and training a complex event classifier from so few examples is extremely challenging. It is therefore necessary to develop a video retrieval method that works with only a handful of positive training videos. The proposed triplet ranking network is designed for exactly this situation and has the following properties: (1) it learns an offline, event-independent similarity-matching projection from previous video search tasks or datasets, so that even with a single query video we can retrieve related videos; as more related videos are retrieved, this prior knowledge is transferred to the specific retrieval task to further improve performance; (2) it casts video search as a ranking problem and can exploit partial ordering information in the dataset; (3) owing to these two merits, the method is well suited to the case where only a handful of positive examples are available. Experimental results demonstrate the effectiveness of the proposed method for video retrieval with only a handful of positive exemplars.
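As a rough illustration of the approach described above (the paper itself provides no code), the following minimal PyTorch sketch shows how a triplet ranking network of this kind is typically set up: an event-independent projection is trained offline on triplets mined from previous search tasks, and repository videos are then ranked by embedding distance to the few query exemplars. The feature dimension, layer sizes, margin, and batch size are illustrative assumptions, not values from the paper.

    # Minimal sketch, not the authors' implementation: a triplet ranking
    # network that learns an event-independent similarity projection.
    # All dimensions and hyper-parameters are illustrative assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TripletRankingNet(nn.Module):
        """Projects pre-extracted video features into a similarity space."""
        def __init__(self, in_dim=4096, embed_dim=256):
            super().__init__()
            self.proj = nn.Sequential(
                nn.Linear(in_dim, 1024),
                nn.ReLU(),
                nn.Linear(1024, embed_dim),
            )

        def forward(self, x):
            # L2-normalise so Euclidean distance behaves like cosine similarity.
            return F.normalize(self.proj(x), dim=-1)

    def triplet_loss(anchor, positive, negative, margin=0.2):
        # Push the positive closer to the anchor than the negative by a margin,
        # encoding the partial ordering "positive ranks above negative".
        d_pos = (anchor - positive).pow(2).sum(dim=-1)
        d_neg = (anchor - negative).pow(2).sum(dim=-1)
        return F.relu(d_pos - d_neg + margin).mean()

    # Offline training step on triplets mined from previous search tasks;
    # random tensors stand in for real video features here.
    net = TripletRankingNet()
    opt = torch.optim.SGD(net.parameters(), lr=1e-3)
    a, p, n = (torch.randn(32, 4096) for _ in range(3))
    loss = triplet_loss(net(a), net(p), net(n))
    opt.zero_grad()
    loss.backward()
    opt.step()

At query time the learned projection stays fixed; since ranking only requires distances between embeddings of the exemplars and repository videos, rather than a per-event classifier, the method remains usable even with a single positive video.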

Keywords

Video search · Few positives · Partially ordered · Ranking network · Knowledge adaptation

Notes

Acknowledgement

This work was supported by the National Basic Research Program of China (Grant No. 2015CB351705) and the State Key Program of the National Natural Science Foundation of China (Grant No. 61332018).


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • De Cheng (1, 2)
  • Lu Jiang (2)
  • Yihong Gong (1)
  • Nanning Zheng (1)
  • Alexander G. Hauptmann (2)

  1. Xi’an Jiaotong University, Xi’an, China
  2. Carnegie Mellon University, Pittsburgh, USA
