Improving Clinical Named Entity Recognition with Global Neural Attention

  • Guohai Xu
  • Chengyu Wang
  • Xiaofeng He
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10988)

Abstract

Clinical named entity recognition (NER) is a foundational technology for acquiring knowledge from electronic medical records. Conventional clinical NER methods suffer from heavy feature engineering. Moreover, these methods treat NER as a sentence-level task and ignore long-range contextual dependencies. In this paper, we propose an attention-based neural network architecture that leverages document-level global information to alleviate this problem. The global information is obtained by applying neural attention over a document representation produced by a pre-trained bidirectional language model (Bi-LM). Because the Bi-LM is pre-trained on unlabeled data, its parameters can be transferred to the NER model to further improve performance. We evaluate our model on the 2010 i2b2/VA datasets to verify the effectiveness of leveraging global information and of the transfer strategy. Our model outperforms the previous state-of-the-art method with less labeled data and no feature engineering.
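The core mechanism the abstract describes, attending over document-level representations from a pre-trained Bi-LM to build a global context for each token, can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the dot-product scoring function, the concatenation of token state and global context, and all shapes are assumptions for the sake of the example.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def global_attention(h_t, doc_reprs):
    """Build a globally informed representation for one token.

    h_t       : (d,)   token hidden state from the sentence-level encoder
    doc_reprs : (n, d) sentence vectors for the whole document,
                e.g. produced by a pre-trained Bi-LM (hypothetical shapes)
    returns   : (2*d,) token state concatenated with its global context
    """
    scores = doc_reprs @ h_t           # dot-product relevance scores (an assumption)
    alpha = softmax(scores)            # attention weights over the document
    g_t = alpha @ doc_reprs            # attention-weighted global context vector
    return np.concatenate([h_t, g_t])  # would then feed a tagger, e.g. a CRF layer

# toy usage: a document of 3 sentence vectors, hidden size 4
rng = np.random.default_rng(0)
h = rng.standard_normal(4)
D = rng.standard_normal((3, 4))
out = global_attention(h, D)
print(out.shape)  # (8,)
```

The key point is that each token's representation is enriched with information from the entire document before tagging, which is how long-range dependencies beyond the current sentence can influence the NER decision.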

Keywords

Clinical named entity recognition · Neural attention · Language model

Notes

Acknowledgments

This work was supported by the National Key Research and Development Program of China under Grant No. 2016YFB1000904.

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. School of Computer Science and Software Engineering, East China Normal University, Shanghai, China