
Accuracy

Reference work entry in the Encyclopedia of Machine Learning

Definition

Accuracy is a measure of the degree to which the predictions of a model match the reality being modeled. The term is most often applied to classification models; in that context, accuracy = P(λ(X) = Y), where the instance-label pair (X, Y) is drawn from a joint distribution and the classification model λ is a function from X to Y. This quantity is sometimes expressed as a percentage rather than a value between 0.0 and 1.0.
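As an illustration of this probabilistic definition, the following minimal sketch estimates P(λ(X) = Y) by Monte Carlo sampling. The joint distribution and the classifier λ below are invented purely for illustration; they are not part of the entry.

```python
# Hypothetical illustration: estimating P(λ(X) = Y) by sampling from an
# assumed joint distribution over (X, Y). Both the distribution and the
# classifier are toy constructions for this sketch only.
import random

def sample_xy():
    """Draw (x, y) from a toy joint distribution: y is the sign of x, flipped 10% of the time."""
    x = random.gauss(0.0, 1.0)
    y = 1 if x >= 0 else 0
    if random.random() < 0.1:   # label noise
        y = 1 - y
    return x, y

def model(x):
    """A simple classifier λ: X -> Y that predicts the sign of x."""
    return 1 if x >= 0 else 0

n = 100_000
correct = sum(model(x) == y for x, y in (sample_xy() for _ in range(n)))
print(f"estimated accuracy P(λ(X) = Y) ≈ {correct / n:.3f}")   # ≈ 0.9 for this toy setup
```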

The accuracy of a model is often assessed or estimated by applying it to test data for which the labels (Y values) are known. The accuracy of a classifier on test data may be calculated as the number of correctly classified objects divided by the total number of objects. Alternatively, a smoothing function may be applied, such as a Laplace estimate or an m-estimate.
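A minimal sketch of these estimates, assuming binary labels encoded as 0/1. The Laplace and m-estimate forms shown are the usual binomial-proportion corrections; the prior p and weight m are illustrative defaults rather than values fixed by the entry.

```python
# Test-set accuracy plus two smoothed variants (illustrative sketch).

def accuracy(y_true, y_pred):
    """Plain accuracy: correctly classified objects / total objects."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def laplace_accuracy(y_true, y_pred):
    """Laplace estimate: add one pseudo-success and one pseudo-failure."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return (correct + 1) / (len(y_true) + 2)

def m_estimate_accuracy(y_true, y_pred, m=2, prior=0.5):
    """m-estimate: shrink the observed proportion toward a prior with weight m."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return (correct + m * prior) / (len(y_true) + m)

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
print(accuracy(y_true, y_pred))             # 0.75  (6 of 8 correct)
print(laplace_accuracy(y_true, y_pred))     # 0.7   ((6 + 1) / (8 + 2))
print(m_estimate_accuracy(y_true, y_pred))  # 0.7   ((6 + 2 * 0.5) / (8 + 2))
```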

Accuracy is directly related to error rate, such that accuracy = 1.0 − error rate (or when expressed as a percentage, accuracy = 100 − error rate).
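A brief numeric illustration of this complement relation, reusing the test-set accuracy of 0.75 from the sketch above:

```python
# Accuracy and error rate are complements, whether expressed as
# proportions or as percentages.
accuracy = 0.75                             # e.g. 6 of 8 test objects classified correctly
error_rate = 1.0 - accuracy                 # 0.25
accuracy_pct = 100.0 - error_rate * 100.0   # 75.0, the percentage form
print(error_rate, accuracy_pct)
```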

Cross References

Confusion Matrix

Resubstitution Accuracy



Copyright information

© 2011 Springer Science+Business Media, LLC


Cite this entry

(2011). Accuracy. In: Sammut, C., Webb, G.I. (eds) Encyclopedia of Machine Learning. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-30164-8_3
