
HanFont: large-scale adaptive Hangul font recognizer using CNN and font clustering

  • Jinhyeok Yang
  • Heebeom Kim
  • Hyobin Kwak
  • Injung Kim
Original Paper

Abstract

We propose HanFont, a large-scale Hangul font recognizer capable of recognizing 3300 Hangul fonts. Large-scale Hangul font recognition is a challenging task: Hangul fonts are typically distinguished by small differences in detailed glyph shapes, which recognizers often overlook. Practical applications raise further issues, such as the existence of nearly indistinguishable fonts and the release of new fonts after the recognizer has been trained. Only a few recently developed font recognizers scale to thousands of fonts, and most of them target fonts for Western languages. HanFont combines a convolutional neural network (CNN) model designed to effectively distinguish detailed shapes with a font clustering algorithm that addresses the issues caused by indistinguishable fonts and untrained new fonts. In our experiments, HanFont achieved a recognition rate of 94.11% on 3300 Hangul fonts including numerous similar fonts, which is 2.49% higher than that of ResNet. Its cluster-level recognition accuracy was 99.47% when the 3300 fonts were grouped into 1000 clusters. In a test on 100 new fonts without retraining the CNN model, HanFont achieved 57.87% accuracy; the average accuracy over the top 56 untrained fonts was 75.76%.
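
The paper itself includes no code, and the abstract does not name the clustering algorithm. As a rough, hypothetical illustration of the cluster-level decision described above, the following Python sketch assumes fonts are grouped by k-means over CNN-derived font embeddings (random vectors stand in for real features) and pools the CNN's per-font softmax scores into cluster scores; it is a minimal sketch, not HanFont's actual method:

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical setup: each of the 3300 fonts is represented by a feature
    # vector, e.g. the CNN's penultimate-layer activations averaged over
    # sample glyph images of that font. Random vectors stand in here.
    rng = np.random.default_rng(0)
    num_fonts, num_clusters, dim = 3300, 1000, 256
    font_embeddings = rng.normal(size=(num_fonts, dim))

    # Group near-indistinguishable fonts into clusters (the paper reports
    # 99.47% cluster-level accuracy with 1000 clusters).
    kmeans = KMeans(n_clusters=num_clusters, n_init=10, random_state=0)
    font_to_cluster = kmeans.fit_predict(font_embeddings)

    def cluster_level_prediction(cnn_probs):
        """Map the CNN's per-font softmax output to a cluster prediction
        by summing the probability mass of each cluster's member fonts."""
        cluster_scores = np.zeros(num_clusters)
        np.add.at(cluster_scores, font_to_cluster, cnn_probs)
        return int(cluster_scores.argmax())

    # Example: a dummy softmax output over the 3300 trained fonts.
    probs = rng.random(num_fonts)
    probs /= probs.sum()
    print("predicted cluster:", cluster_level_prediction(probs))

Pooling probability mass over a cluster's member fonts is one simple way to produce a cluster-level label; the clustering features and decision rule actually used by HanFont may differ.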

Keywords

Font recognition · Font clustering · Large-scale classification · Convolutional neural networks · Deep learning

Notes

Acknowledgements

This work was supported by the Cultural Technology R&D Program funded by the Ministry of Culture, Sports and Tourism and the Korea Creative Content Agency, and by the National Program for Excellence in Software funded by the Ministry of Science and ICT, Republic of Korea (2017000130).

Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2019

Authors and Affiliations

  1. School of CSEE, Handong Global University, Pohang, Republic of Korea