
Part of the book series: Studies in Computational Intelligence ((SCI,volume 420))


Abstract

In the previous chapters the behavior of classifiers trained to minimize error-entropy risks, for both discrete and continuous errors, was analyzed. The rationale behind the use of these risks is the fact that entropy is a measure of PDF concentration (higher concentration implies lower entropy) and, in addition (recalling what was said in Sect. 2.3.1), minimum entropy is attained for Dirac-δ combs, including a single Dirac-δ. Ideally, in supervised classification, one would like to drive the learning process so that the final distribution of the error variable is a Dirac-δ centered at the origin. Strictly speaking, this would only happen for completely separable classes in the discrete error case, or for infinitely distant classes in the continuous error case with the whole real line as support.
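The link between error concentration and entropy can be illustrated numerically. The sketch below estimates Rényi's quadratic entropy of an error sample with a Parzen (Gaussian-kernel) estimator of the information potential, a standard estimator in error-entropy learning; the function name, bandwidth value, and sample distributions are illustrative choices, not taken from the book. A tightly concentrated error sample yields a lower entropy estimate than a dispersed one with the same mean:

```python
import numpy as np

def renyi_quadratic_entropy(errors, h=0.5):
    """Estimate H2(E) = -log V(E), where the information potential
    V(E) = E[p(e)] is computed with a Parzen window of Gaussian kernels
    of bandwidth h (the pairwise formula uses an effective width h*sqrt(2))."""
    e = np.asarray(errors, dtype=float)
    n = len(e)
    diff = e[:, None] - e[None, :]          # all pairwise error differences
    # Sum of Gaussian kernels G_{h*sqrt(2)}(e_i - e_j), properly normalized
    v = np.exp(-diff**2 / (4.0 * h**2)).sum() / (n**2 * 2.0 * h * np.sqrt(np.pi))
    return -np.log(v)

rng = np.random.default_rng(0)
concentrated = rng.normal(0.0, 0.1, 500)    # errors packed near the origin
spread = rng.normal(0.0, 1.0, 500)          # same mean, far more dispersed

print(renyi_quadratic_entropy(concentrated))
print(renyi_quadratic_entropy(spread))
```

The concentrated sample produces the smaller entropy value, mirroring the rationale above: driving the error PDF toward a Dirac-δ at the origin is, in entropy terms, driving its estimated entropy toward its minimum.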


Author information


Corresponding author

Correspondence to Joaquim P. Marques de Sá.


Copyright information

© 2013 Springer Berlin Heidelberg

About this chapter

Cite this chapter

Marques de Sá, J.P., Silva, L.M.A., Santos, J.M.F., Alexandre, L.A. (2013). EE-Inspired Risks. In: Minimum Error Entropy Classification. Studies in Computational Intelligence, vol 420. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-29029-9_5


  • DOI: https://doi.org/10.1007/978-3-642-29029-9_5

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-29028-2

  • Online ISBN: 978-3-642-29029-9

  • eBook Packages: Engineering (R0)
