
Lexicalized Beam Thresholding Parsing with Prior and Boundary Estimates

  • Conference paper
Computational Linguistics and Intelligent Text Processing (CICLing 2005)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 3406)

Abstract

We use prior and boundary estimates as approximations of the outside probability and build our beam thresholding strategies on these estimates. Lexical items, e.g. the head word and head tag, are also incorporated to produce lexicalized prior and boundary estimates. Experiments on the Penn Chinese Treebank show that beam thresholding with the lexicalized prior works much better than with the unlexicalized prior. Distinguishing completed edges from incomplete edges paves the way for using boundary estimates in edge-based beam chart parsing. Beam thresholding based on the lexicalized prior, combined with the unlexicalized boundary, runs faster than thresholding with the lexicalized prior alone by a factor of 1.5, at the same performance level.
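To make the thresholding idea concrete, the following Python sketch shows prior-based beam pruning over a single chart cell: an edge's figure of merit is taken as its inside probability multiplied by a prior over its label (a crude stand-in for the outside probability), and edges that fall below a relative beam of the cell's best figure of merit are discarded. This is a minimal illustrative example under assumed names and numbers (prune_cell, BEAM, and the toy probabilities are not from the paper), not the authors' implementation.

    # Minimal sketch of prior-based beam thresholding over one chart cell.
    # Hypothetical names and probabilities; not the authors' implementation.

    BEAM = 1e-3  # relative beam width (illustrative value)

    def prune_cell(edges, prior):
        """Keep only edges whose figure of merit is within BEAM of the cell's best.

        edges : dict label -> inside probability of the edge over this span
        prior : dict label -> prior probability of the label, used here as
                a crude stand-in for the edge's outside probability
        """
        fom = {label: prior.get(label, 1e-12) * p for label, p in edges.items()}
        best = max(fom.values())
        return {label: p for label, p in edges.items() if fom[label] >= BEAM * best}

    # Toy cell with three competing constituents over the same span.
    cell = {"NP": 2e-5, "ADJP": 8e-7, "VP": 3e-9}
    priors = {"NP": 0.30, "ADJP": 0.05, "VP": 0.25}
    print(prune_cell(cell, priors))  # the VP edge falls below the beam and is dropped

The lexicalized variant described in the abstract would condition the prior on the head word and head tag in addition to the constituent label, rather than on the label alone.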

This work was supported in part by the National High Technology Research and Development Program under grant #2001AA114010.




Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Xiong, D., Liu, Q., Lin, S. (2005). Lexicalized Beam Thresholding Parsing with Prior and Boundary Estimates. In: Gelbukh, A. (eds) Computational Linguistics and Intelligent Text Processing. CICLing 2005. Lecture Notes in Computer Science, vol 3406. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-30586-6_13


  • DOI: https://doi.org/10.1007/978-3-540-30586-6_13

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-24523-0

  • Online ISBN: 978-3-540-30586-6

  • eBook Packages: Computer Science, Computer Science (R0)
