Editorial for the ICMR 2018 special issue
The ACM International Conference on Multimedia Retrieval—ICMR’18—is the premier forum for presenting research results and experience reports on multimedia retrieval research and systems. In 2018, the conference was organized in Yokohama, Japan. Although ICMR as an annual international conference only started in 2011, it resulted from the fusion of two prestigious events, namely ACM CIVR (from 2002) and ACM MIR (from 2000); counting from those origins, the 2018 edition was the 19th in the series. For over 19 years, this conference series has provided researchers and practitioners across academia and industry with a venue for exchanging and showcasing leading-edge ideas in multimedia retrieval. Multimedia computing, indexing, and retrieval remain among the most exciting and fastest-growing research areas in the field of multimedia technology.
Out of the many high-quality papers selected for presentation at the conference, we identified four with particularly strong reviewer comments for this IJMR special issue on multimedia retrieval: “Joint Embeddings with Multimodal Cues for Video-Text Retrieval” by Niluthpol C. Mithun, Juncheng Li, Florian Metze, and Amit K. Roy-Chowdhury; “Mining Exoticism from Visual Content with Fusion-based Deep Neural Networks” by Andrea Ceroni, Chenyang Ma, and Ralph Ewerth; “Automatic Visual Pattern Mining from Categorical Image Dataset” by Hongzhi Li, Joseph G. Ellis, Lei Zhang, and Shih-Fu Chang; and “Multi-view Collective Tensor Decomposition for Cross-modal Hashing” by Limeng Cui, Jiawei Zhang, Lifang He, and Philip S. Yu.
These papers were candidates for either the Best Paper or the Best Multimodal Paper Award at the ICMR 2018 conference. As such, they represent the most advanced and timely topics addressed by the multimedia retrieval community. They also highlight the diversity of the field and the richness of the themes addressed by the conference and its attendees, ranging from multimodal representation learning and visual content mining to cross-modal hashing techniques, to mention only a few of the topics of interest. The articles in this special issue went through multiple rounds of reviewing by international experts in the field, ensuring that each substantially extends the version presented at ICMR 2018 and that all manuscripts meet a high standard of quality.
The paper by Mithun et al., “Joint Embeddings with Multimodal Cues for Video-Text Retrieval”, received the Best Paper Award at the conference. The authors propose a multimodal model that computes audio-visual embeddings for video-text retrieval. The reported experiments demonstrate state-of-the-art results on the MSVD dataset for both the video retrieval and the caption retrieval tasks.
In “Mining Exoticism from Visual Content with Fusion-based Deep Neural Networks”, Ceroni et al. explore the novel problem of identifying exotic images. Both automatically learned features and selected handcrafted features are evaluated on this classification task. A new dataset addressing two sub-tasks (generic and concept-specific) is introduced and made available to researchers interested in this problem.
The work of Li et al., “Automatic Visual Pattern Mining from Categorical Image Dataset”, presents a method for finding visual patterns that are discriminative and representative for a specific semantic category. They show that the method improves over the state of the art on the task of image classification.
In “Multi-view Collective Tensor Decomposition for Cross-modal Hashing”, Cui et al. address the problem of cross-media retrieval with hash codes learned via a tensor model. Experiments show that the multi-view collective tensor decomposition compares favorably with several existing cross-modal hashing approaches on two commonly used datasets.
This special issue would not have been possible without the high-quality work of the authors of the accepted papers. We would like to acknowledge the engagement of the many reviewers who provided constructive comments to improve the papers. We are particularly grateful for the assistance of the editorial office of IJMR during the preparation of this special issue.