
F-Measure as the Error Function to Train Neural Networks

  • Conference paper
Advances in Computational Intelligence (IWANN 2013)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 7902)


Abstract

Imbalanced datasets pose serious problems in machine learning. For many tasks characterized by imbalanced data, the F-Measure seems more appropriate than the Mean Square Error or other error measures. This paper studies the use of the F-Measure as the training criterion for neural networks by integrating it into the error-backpropagation algorithm. This novel training criterion has been validated empirically on a real task for which the F-Measure is typically used to evaluate quality: cleaning and enhancing ancient document images, which is performed in this work by means of neural filters.
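As a rough illustration of the idea (not necessarily the authors' exact formulation), one common way to turn the F-Measure into a trainable criterion is to replace the hard true-positive/false-positive/false-negative counts with "soft" counts computed from the network's real-valued outputs; the resulting F-Measure is differentiable and its gradient can be propagated back through the network like any other error function. The sketch below, in Python with NumPy, shows this soft-F1 construction; the function name `soft_f1_loss`, the `eps` smoothing term, and the analytic gradient are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def soft_f1_loss(p, t, eps=1e-8):
    """Differentiable (soft) F1 loss and its gradient.

    p -- predicted probabilities in [0, 1] (network outputs)
    t -- binary targets in {0, 1}

    Treating each probability as a fractional count makes the
    true-positive / false-positive / false-negative totals smooth
    functions of the outputs, so the F-Measure itself becomes
    differentiable and usable inside error backpropagation.
    """
    tp = np.sum(p * t)                    # soft true positives
    denom = np.sum(p) + np.sum(t) + eps   # equals 2*TP + FP + FN
    f1 = 2.0 * tp / denom
    # Analytic gradient of the loss (1 - F1) w.r.t. each output p_i:
    #   dF1/dp_i = (2*t_i - F1) / denom
    grad = -(2.0 * t - f1) / denom
    return 1.0 - f1, grad

# Toy usage with made-up outputs and targets
p = np.array([0.9, 0.2, 0.7, 0.1])
t = np.array([1.0, 0.0, 1.0, 0.0])
loss, grad = soft_f1_loss(p, t)
print(f"loss = {loss:.4f}")
print("grad =", grad)
```

In a backpropagation setting, `grad` plays the role that the derivative of the Mean Square Error normally plays at the output layer. Note one structural difference from MSE: because the soft counts are sums over all examples, the gradient at each output depends on the whole batch rather than on that example alone.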





Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Pastor-Pellicer, J., Zamora-Martínez, F., España-Boquera, S., Castro-Bleda, M.J. (2013). F-Measure as the Error Function to Train Neural Networks. In: Rojas, I., Joya, G., Cabestany, J. (eds) Advances in Computational Intelligence. IWANN 2013. Lecture Notes in Computer Science, vol 7902. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-38679-4_37


  • DOI: https://doi.org/10.1007/978-3-642-38679-4_37

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-38678-7

  • Online ISBN: 978-3-642-38679-4

  • eBook Packages: Computer Science (R0)
