Jointly Learning Text and Visual Information in the Scientific Domain

  • Jose Manuel Gomez-Perez
  • Ronald Denaux
  • Andres Garcia-Silva
Chapter

Abstract

In this chapter we address multi-modality in domains where not only text but also images or, as we will see, scientific figures and diagrams are important sources of information for the task at hand. Compared to natural images, scientific figures are particularly hard for machines to understand. However, scientific literature contains a valuable source of information that has until now remained untapped: the correspondence between a figure and its caption. In this chapter we show what can be learnt by looking at a large number of figures and reading their captions, and describe a figure-caption correspondence learning task that exploits this observation. We show that training visual and language networks with no supervision other than pairs of unconstrained figures and captions successfully solves this task. Following up on previous chapters, we also illustrate how transferring lexical and semantic knowledge from a knowledge graph significantly enriches the resulting features. Finally, we demonstrate the positive impact of such hybrid, semantically enriched features in two transfer learning experiments involving scientific text and figures: multi-modal classification and machine comprehension for question answering.
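To make the correspondence task concrete, the sketch below pairs a small image encoder with a caption encoder and trains a binary classifier to tell true figure-caption pairs from mismatched ones. This is a minimal sketch in PyTorch: the module names, dimensions, and the negative-sampling scheme (shuffling captions within a batch) are illustrative assumptions, not the chapter's actual networks.

    # Minimal sketch of figure-caption correspondence learning.
    # All architectures and hyperparameters here are illustrative
    # assumptions, not the chapter's actual setup.
    import torch
    import torch.nn as nn

    class FigureEncoder(nn.Module):
        """Small CNN mapping a figure image to a feature vector."""
        def __init__(self, dim=128):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.fc = nn.Linear(64, dim)

        def forward(self, x):
            return self.fc(self.conv(x).flatten(1))

    class CaptionEncoder(nn.Module):
        """Embedding + GRU mapping a token-id sequence to a feature vector."""
        def __init__(self, vocab_size=10000, dim=128):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, dim)
            self.gru = nn.GRU(dim, dim, batch_first=True)

        def forward(self, tokens):
            _, h = self.gru(self.emb(tokens))
            return h[-1]

    class CorrespondenceModel(nn.Module):
        """Predicts whether a (figure, caption) pair actually co-occurs."""
        def __init__(self, dim=128):
            super().__init__()
            self.fig = FigureEncoder(dim)
            self.cap = CaptionEncoder(dim=dim)
            self.clf = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                     nn.Linear(dim, 1))

        def forward(self, images, tokens):
            z = torch.cat([self.fig(images), self.cap(tokens)], dim=-1)
            return self.clf(z).squeeze(-1)  # logit: match vs. mismatch

    # One training step: positives are true pairs; negatives pair each
    # figure with a caption drawn from elsewhere in the batch.
    model = CorrespondenceModel()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.BCEWithLogitsLoss()

    images = torch.randn(8, 3, 64, 64)          # dummy figure batch
    tokens = torch.randint(0, 10000, (8, 20))   # dummy caption batch
    neg_tokens = tokens[torch.randperm(8)]      # mismatched captions

    opt.zero_grad()
    logits = torch.cat([model(images, tokens), model(images, neg_tokens)])
    labels = torch.cat([torch.ones(8), torch.zeros(8)])
    loss = loss_fn(logits, labels)
    loss.backward()
    opt.step()

Because the only supervision is which caption came with which figure, training data of this kind can be harvested at scale from scientific papers without manual annotation; the knowledge-graph enrichment described in the chapter would further augment the caption-side features.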

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Jose Manuel Gomez-Perez (1)
  • Ronald Denaux (1)
  • Andres Garcia-Silva (1)

  1. Expert System, Madrid, Spain
