Chinese Image Character Recognition Using DNN and Machine Simulated Training Samples

  • Jinfeng Bai
  • Zhineng Chen
  • Bailan Feng
  • Bo Xu
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8681)


Inspired by the success of deep neural network (DNN) models in solving challenging visual problems, this paper studies the task of Chinese Image Character Recognition (ChnICR) by leveraging a DNN model and a large set of machine-simulated training samples. To generate the samples, clean machine-born Chinese characters are extracted and then augmented with common variations of image characters, such as changes in size, font, boldness, and shift, as well as complex backgrounds. This produces over 28 million character images in total, covering the vast majority of occurrences of Chinese characters in real-life images. Based on these samples, a DNN training procedure is employed to learn an appropriate Chinese character recognizer, where the width and depth of the DNN and the volume of training samples are empirically discussed. In parallel, a holistic Chinese image text recognition system is developed. Encouraging experimental results on text from 13 TV channels demonstrate the effectiveness of the learned recognizer, which yields significant performance gains over the baseline system.
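The abstract does not specify the exact augmentation pipeline, but the idea of simulating training samples from clean glyphs can be sketched as follows. This is a minimal, hypothetical illustration of the named variation types (shift and complex backgrounds) on a toy binary glyph; all function names are assumptions, and a real pipeline would also vary size, font, and boldness via a rendering library.

```python
import random

def shift(glyph, dx, dy, size):
    """Place a binary glyph into a size x size canvas at offset (dx, dy)."""
    canvas = [[0] * size for _ in range(size)]
    for y, row in enumerate(glyph):
        for x, v in enumerate(row):
            yy, xx = y + dy, x + dx
            if 0 <= yy < size and 0 <= xx < size:
                canvas[yy][xx] = v
    return canvas

def add_background(canvas, noise_level, rng):
    """Superimpose random clutter on empty pixels to mimic complex backgrounds."""
    return [[v if v else (1 if rng.random() < noise_level else 0) for v in row]
            for row in canvas]

def simulate_samples(glyph, size, n, seed=0):
    """Generate n varied training samples from one clean machine-born glyph."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        # random shift within the canvas
        dx = rng.randrange(0, size - len(glyph[0]) + 1)
        dy = rng.randrange(0, size - len(glyph) + 1)
        sample = shift(glyph, dx, dy, size)
        sample = add_background(sample, noise_level=0.1, rng=rng)
        samples.append(sample)
    return samples

# toy 2x2 "glyph" standing in for a rendered Chinese character
glyph = [[1, 1], [1, 0]]
samples = simulate_samples(glyph, size=8, n=4)
```

Repeating such randomized transforms over thousands of characters and fonts is one plausible way to reach the tens of millions of samples mentioned above.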


Keywords: Chinese Image Character Recognition · Deep Neural Network · Image Text · Video Text





Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Jinfeng Bai (1)
  • Zhineng Chen (1)
  • Bailan Feng (1)
  • Bo Xu (1)
  1. Interactive Digital Media Technology Research Center, Institute of Automation, Chinese Academy of Sciences, Beijing, China
