DeepBIBX: Deep Learning for Image Based Bibliographic Data Extraction

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10635)

Abstract

Extraction of structured bibliographic data from document images of non-native-digital academic content is a challenging problem with applications in the automation of library cataloging systems and in the reference linking domain. Existing approaches discard visual cues: they convert the document image to text and then identify citation strings with trained segmentation models. Besides requiring large amounts of training data, these methods are also language dependent. This paper presents a novel approach, DeepBIBX, which targets the problem from a computer vision perspective and uses deep learning to semantically segment the individual citation strings in a document image. DeepBIBX is based on deep fully convolutional networks and uses transfer learning to extract bibliographic references from document images. Unlike existing approaches, which use textual content to semantically segment bibliographic references, DeepBIBX utilizes image-based contextual information, which makes it applicable to documents in any language. To gauge the performance of the presented approach, a dataset of 286 document images containing 5090 bibliographic references was collected. Evaluation results reveal that DeepBIBX outperforms the state-of-the-art method for bibliographic reference extraction (ParsCit), achieving an accuracy of 84.9% compared to 71.7%. Furthermore, on the pixel classification task, DeepBIBX achieved a precision of 96.2% and a recall of 94.4%.
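As an illustration of the kind of pipeline the abstract describes, the sketch below segments citation-string pixels in a page image with a fully convolutional network whose backbone is pretrained (transfer learning), then splits the resulting mask into individual references via connected components. This is a minimal sketch under assumptions, not the authors' implementation: the torchvision ResNet-50 FCN, the two-class labeling, the preprocessing constants, and the extract_reference_regions helper are all illustrative choices, and the replaced classifier head would still need fine-tuning on annotated page images before it produces meaningful masks.

```python
# Hypothetical sketch of FCN-based citation-string segmentation with transfer
# learning. Backbone, class count, and post-processing are assumptions for
# illustration; the paper does not specify this exact stack.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image
from scipy import ndimage

NUM_CLASSES = 2  # assumed labeling: background vs. citation-string pixels

# Start from a pretrained FCN (requires torchvision >= 0.13 for the weights API)
# and swap the final 1x1 conv so the head predicts our two classes.
model = models.segmentation.fcn_resnet50(weights="DEFAULT")
model.classifier[4] = nn.Conv2d(512, NUM_CLASSES, kernel_size=1)
model.eval()  # in practice, fine-tune the new head on annotated pages first

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def extract_reference_regions(page_path):
    """Segment a document page and return one bounding box per citation string."""
    image = Image.open(page_path).convert("RGB")
    x = preprocess(image).unsqueeze(0)               # [1, 3, H, W]
    with torch.no_grad():
        logits = model(x)["out"]                     # [1, NUM_CLASSES, H, W]
    mask = logits.argmax(dim=1).squeeze(0).numpy()   # H x W, 1 = citation pixel

    # Split the binary mask into individual references via connected components.
    labeled, _ = ndimage.label(mask == 1)
    boxes = ndimage.find_objects(labeled)            # list of (row_slice, col_slice)
    return [(s[1].start, s[0].start, s[1].stop, s[0].stop) for s in boxes]
```

Separating the references with connected components reflects the idea of treating each citation string as a spatially contiguous region in the segmentation mask; cropping each returned box would yield one reference image per string for downstream OCR or parsing.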

References

  1. Councill, I.G., Giles, C.L., Kan, M.Y.: ParsCit: an open-source CRF reference string parsing package. In: LREC 2008 (2008)

  2. Tkaczyk, D., Szostek, P., Fedoryszak, M., Dendek, P.J., Bolikowski, Ł.: Cermine: automatic extraction of structured metadata from scientific literature. Int. J. Doc. Anal. Recognit. (IJDAR) 18(4), 317–335 (2015)

  3. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)

  4. Zhang, X., Li, Z., Loy, C.C., Lin, D.: Polynet: a pursuit of structural diversity in very deep networks. arXiv preprint arXiv:1611.05725 (2016)

  5. Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: AAAI, pp. 4278–4284 (2017)

  6. Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431–3440 (2015)

  7. Caruana, R.: Multitask Learning. In: Thrun, S., Pratt, L. (eds.) Learning to Learn. Springer, Boston (1998)

  8. Everingham, M., Van Gool, L., Williams, C., Winn, J., Zisserman, A.: PASCAL visual object classes challenge results 1(6), 7 (2005). www.pascal-network.org

  9. Johnson, R.K.: Special issue: In google’s broad wake: taking responsibility for shaping the global digital library. ARL: A bimonthly report on research library issues and actions from ARL, CNI, and SPARC, vol. 250. Association of Research Libraries (2007)

  10. Crossref labs pdfextract, https://www.crossref.org/labs/pdfextract/

  11. Breuel, T.M.: The OCRopus open source OCR system. DRR 6815, 68150 (2008)

  12. Anystyle.io, https://anystyle.io

Acknowledgements

This work was partially supported by the DFG under contract DE 420/18-1 and by the Swiss National Science Foundation under grant number 407540_167320.

Author information

Corresponding author

Correspondence to Akansha Bhardwaj.

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Bhardwaj, A., Mercier, D., Dengel, A., Ahmed, S. (2017). DeepBIBX: Deep Learning for Image Based Bibliographic Data Extraction. In: Liu, D., Xie, S., Li, Y., Zhao, D., El-Alfy, E.S. (eds.) Neural Information Processing. ICONIP 2017. Lecture Notes in Computer Science, vol. 10635. Springer, Cham. https://doi.org/10.1007/978-3-319-70096-0_30

  • DOI: https://doi.org/10.1007/978-3-319-70096-0_30

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-70095-3

  • Online ISBN: 978-3-319-70096-0

  • eBook Packages: Computer Science, Computer Science (R0)
