
A Comparative Study on the Classification Performance of Machine Learning Models for Academic Full Texts

  • Haotian Hu
  • Sanhong Deng
  • Haoxiang Lu
  • Dongbo Wang (corresponding author)
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12051)

Abstract

[Objectives] This study compares the classification performance of a range of machine learning models, covering both traditional machine learning and deep learning approaches. It aims to supply the chapter-structure category information that is often missing from academic literature, thereby supporting retrieval of content from specified chapter structures and the automatic extraction of specific texts for customized services. [Methodology] 31,888 academic articles from the journal “PLOS ONE” were selected. After data cleaning and segmentation, a text classification corpus containing 313,952 chapter-structure category instances was constructed. A total of 17 machine learning models, comprising the traditional models NB, SVM, and CRF and the deep learning RNN, Bi-LSTM, IDCNN, and BERT model groups, were used to carry out chapter-structure classification experiments. [Results] The BERT-Bi-LSTM-CRF model achieved the best classification performance, with an average F value of 71.18%, which is 0.51% and 3.31% higher than the second-ranked CRF and the third-ranked Bi-LSTM-CRF, respectively. Among the deep learning models, using BERT for text representation outperformed word2vec, and adding an Attention mechanism or replacing the Softmax layer with a CRF layer further improved classification results. In addition, an online Chapter Structure Recognition Presentation and Application Platform was developed; it visualizes the overall study and the model training process and supports online chapter-structure recognition with machine learning and deep learning models such as NB, SVM, CRF, Bi-LSTM, and IDCNN.
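
To make the best-performing architecture concrete, the sketch below shows one common way to wire BERT text representations into a Bi-LSTM encoder with a CRF output layer in place of Softmax. It is not the authors' implementation: the abstract does not specify their configuration, and the pretrained model name ("bert-base-uncased"), the illustrative label set, the hyperparameters, and the use of the pytorch-crf package are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast
from torchcrf import CRF  # pip install pytorch-crf

# Illustrative chapter-structure label set; the paper's actual categories may differ.
CATEGORIES = ["Introduction", "Methods", "Results", "Discussion"]

class BertBiLstmCrf(nn.Module):
    """BERT text representation -> Bi-LSTM encoder -> CRF output layer (replacing Softmax)."""
    def __init__(self, num_tags, lstm_hidden=256, bert_name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)        # contextual embeddings
        self.lstm = nn.LSTM(self.bert.config.hidden_size, lstm_hidden,
                            batch_first=True, bidirectional=True)
        self.emissions = nn.Linear(2 * lstm_hidden, num_tags)   # per-token tag scores
        self.crf = CRF(num_tags, batch_first=True)              # models tag-transition structure

    def forward(self, input_ids, attention_mask, tags=None):
        hidden = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        hidden, _ = self.lstm(hidden)
        scores = self.emissions(hidden)
        mask = attention_mask.bool()
        if tags is not None:
            return -self.crf(scores, tags, mask=mask)           # training loss (negative log-likelihood)
        return self.crf.decode(scores, mask=mask)               # inference: best-scoring tag sequence

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertBiLstmCrf(num_tags=len(CATEGORIES))
batch = tokenizer(["Participants were recruited from three hospitals."],
                  return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    print(model(batch["input_ids"], batch["attention_mask"]))  # tag indices into CATEGORIES
```

In this sketch every token receives a tag, and a single passage-level label would be read off the decoded sequence (for example by majority vote); whether the authors assign labels at the token, sentence, or section level is not stated in the abstract.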

Keywords

Machine learning · Deep learning · BERT · Chapter structure · Classification

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Nanjing University, Nanjing, People’s Republic of China
  2. Nanjing Agricultural University, Nanjing, People’s Republic of China
  3. KU Leuven, Leuven, Belgium
