Coherence-Based Automated Essay Scoring Using Self-attention

  • Xia Li
  • Minping Chen
  • Jianyun Nie
  • Zhenxing Liu
  • Ziheng Feng
  • Yingdan Cai
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11221)

Abstract

Automated essay scoring aims to score an essay automatically, without any human assistance. Traditional methods rely heavily on manual feature engineering, which makes feature extraction expensive. Some recent studies use neural-network-based scoring models to avoid feature engineering, most of them relying on CNNs or RNNs to learn a representation of the essay. Although these models can capture relationships between words within a short distance, they are limited in capturing long-distance relationships across sentences. In particular, it is difficult for them to assess the coherence of an essay, an essential criterion in essay scoring. In this paper, we use self-attention to capture useful long-distance relationships between words and thereby estimate a coherence score. We tested our model on two datasets (ASAP and a new non-native speaker dataset); in both cases, it outperforms existing state-of-the-art models.
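
To make the mechanism concrete, the sketch below shows how a single-head scaled dot-product self-attention layer can relate every word of an essay to every other word, regardless of distance, and reduce the result to a scalar score. It is a minimal PyTorch illustration under stated assumptions, not the authors' implementation: the class name `SelfAttentionScorer`, the mean-pooling step, the sigmoid output, and all dimensions are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttentionScorer(nn.Module):
    """Illustrative essay scorer: single-head self-attention over word
    embeddings, mean pooling, and a scalar score. Hyper-parameters are
    placeholders, not the paper's settings."""

    def __init__(self, embed_dim=300, attn_dim=64):
        super().__init__()
        # Projections for queries, keys and values (one attention head).
        self.q_proj = nn.Linear(embed_dim, attn_dim)
        self.k_proj = nn.Linear(embed_dim, attn_dim)
        self.v_proj = nn.Linear(embed_dim, attn_dim)
        self.out = nn.Linear(attn_dim, 1)

    def forward(self, word_embs):
        # word_embs: (batch, essay_length, embed_dim), e.g. GloVe vectors.
        q = self.q_proj(word_embs)
        k = self.k_proj(word_embs)
        v = self.v_proj(word_embs)
        # Scaled dot-product attention: every word attends to every other
        # word, so long-distance relationships across sentences are visible.
        scores = torch.matmul(q, k.transpose(-2, -1)) / (k.size(-1) ** 0.5)
        attn = F.softmax(scores, dim=-1)
        context = torch.matmul(attn, v)       # (batch, essay_length, attn_dim)
        essay_repr = context.mean(dim=1)      # simple mean pooling
        return torch.sigmoid(self.out(essay_repr)).squeeze(-1)  # score in (0, 1)

# Usage: score a batch of two essays of 500 words each (random embeddings).
essays = torch.randn(2, 500, 300)
print(SelfAttentionScorer()(essays))
```

Because the attention weights are computed between all word pairs, the cost is quadratic in essay length, but no information has to be carried across long recurrent chains, which is what makes this layer attractive for modelling coherence.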

Keywords

Self-attention · Automated essay scoring · Neural networks

Notes

Acknowledgement

This work is supported by the National Science Foundation of China (61402119) and the Special Funds for the Cultivation of Guangdong College Students’ Scientific and Technological Innovation (“Climbing Program” Special Funds).

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Xia Li (1, 2)
  • Minping Chen (2)
  • Jianyun Nie (3)
  • Zhenxing Liu (2)
  • Ziheng Feng (2)
  • Yingdan Cai (2)
  1. Key Laboratory of Language Engineering and Computing, Guangdong University of Foreign Studies, Guangzhou, China
  2. School of Information Science and Technology / School of Cyber Security, Guangdong University of Foreign Studies, Guangzhou, China
  3. Department of Computer Science and Operations Research, University of Montreal, Montreal, Canada
