Abstract
In the previous chapters, the behavior of classifiers trained to minimize error-entropy risks was analyzed for both discrete and continuous errors. The rationale behind the use of these risks is the fact that entropy is a measure of PDF concentration (higher concentration implies lower entropy) and, in addition (recalling what was said in Sect. 2.3.1), that minimum entropy is attained for Dirac-δ combs (including a single Dirac-δ). Ideally, in supervised classification, one would like to drive the learning process so that the final distribution of the error variable is a Dirac-δ centered at the origin. Strictly speaking, this would only happen for completely separable classes in the discrete-error case, or for infinitely distant classes in the continuous-error case with the whole real line as support.
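To make the concentration argument concrete, the following minimal sketch (not taken from the chapter itself) estimates an error-entropy risk as Rényi's quadratic entropy of an error sample via a Gaussian Parzen window; the choice of estimator, the bandwidth h, and the function name are illustrative assumptions rather than the book's prescribed setup. A more concentrated error distribution yields a lower estimated entropy, in line with the Dirac-δ limit described above.

```python
import numpy as np

def renyi_quadratic_entropy(errors, h=0.5):
    """Parzen-window estimate of Renyi's quadratic entropy H2 of a
    1-D error sample (bandwidth h is an illustrative assumption):
    H2 = -log of the average Gaussian kernel over all pairwise
    error differences. Lower H2 means a more concentrated error PDF;
    the minimum is approached as the errors collapse toward a
    Dirac-delta."""
    e = np.asarray(errors, dtype=float)
    diffs = e[:, None] - e[None, :]  # all pairwise differences e_i - e_j
    # Convolving two Gaussian Parzen kernels of width h gives a Gaussian
    # of width h*sqrt(2); its normalization constant is sqrt(4*pi*h^2).
    ip = np.mean(np.exp(-diffs**2 / (4.0 * h**2))) / np.sqrt(4.0 * np.pi * h**2)
    return -np.log(ip)  # information potential -> quadratic entropy

# Concentrated errors (close to a Dirac-delta at 0) give lower entropy
rng = np.random.default_rng(0)
print(renyi_quadratic_entropy(rng.normal(0.0, 0.1, 200)))  # small spread
print(renyi_quadratic_entropy(rng.normal(0.0, 1.0, 200)))  # large spread
```

Taking the negative log of the pairwise-kernel average is the standard Parzen estimator of Rényi's quadratic entropy in information-theoretic learning; it is used here only to illustrate entropy as a concentration measure of the error PDF.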
Copyright information
© 2013 Springer Berlin Heidelberg
Cite this chapter
Marques de Sá, J.P., Silva, L.M.A., Santos, J.M.F., Alexandre, L.A. (2013). EE-Inspired Risks. In: Minimum Error Entropy Classification. Studies in Computational Intelligence, vol 420. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-29029-9_5
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-29028-2
Online ISBN: 978-3-642-29029-9