Encyclopedia of Database Systems

2018 Edition
| Editors: Ling Liu, M. Tamer Özsu

Privacy Metrics

  • Chris Clifton
Reference work entry
DOI: https://doi.org/10.1007/978-1-4614-8265-9_272


Synonyms

Privacy measures


Definition

Measures of the susceptibility of data, or of a dataset, to revealing private information. Such measures include the ability to link private data to an individual, the level of detail or correctness of the sensitive information disclosed, and the amount of background knowledge needed to infer private information.

Historical Background

Legal definitions of privacy are generally based on the concept of Individually Identifiable Data. Unfortunately, this concept does not have a clear meaning in the context of many database privacy technologies. The official statistics (census) community has long been concerned with measures of privacy, particularly in the context of microdata sets (datasets that represent real data, but obscured in ways that protect privacy) and tabular datasets. Measures have largely been based on the probability that a specific value belongs to a given individual, given the disclosed data. As technologies have been developed to anonymize and analyze private...
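The linkage-based measures described above can be made concrete with k-anonymity (see Sweeney in the Recommended Reading): the smallest equivalence class over the quasi-identifier attributes gives k, and 1/k bounds the probability of linking a released record to an individual by those attributes alone. The following is an illustrative sketch only; the function name, column names, and toy table are invented for the example.

```python
from collections import Counter

def k_anonymity(rows, quasi_ids):
    """Return the smallest equivalence-class size over the
    quasi-identifier attributes. The release is k-anonymous for
    this k; the worst-case probability of linking a record to an
    individual via these attributes alone is 1/k."""
    classes = Counter(tuple(row[a] for a in quasi_ids) for row in rows)
    return min(classes.values())

# Toy microdata set: generalized ZIP code and age range are
# quasi-identifiers; diagnosis is the sensitive attribute.
table = [
    {"zip": "479**", "age": "20-29", "diagnosis": "flu"},
    {"zip": "479**", "age": "20-29", "diagnosis": "cold"},
    {"zip": "479**", "age": "30-39", "diagnosis": "flu"},
    {"zip": "479**", "age": "30-39", "diagnosis": "flu"},
]

k = k_anonymity(table, ["zip", "age"])
print(k, 1 / k)  # k = 2, so linkage probability is at most 0.5
```

Note that k-anonymity alone says nothing about the sensitive values within an equivalence class; measures such as l-diversity and t-closeness (also in the Recommended Reading) were developed to address exactly that gap.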


Recommended Reading

  1. Agrawal D, Aggarwal CC. On the design and quantification of privacy preserving data mining algorithms. In: Proceedings of the 20th ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems; 2001. p. 247–55.
  2. Agrawal R, Srikant R. Privacy-preserving data mining. In: Proceedings of the ACM SIGMOD International Conference on Management of Data; 2000. p. 439–50.
  3. Li N, Li T. t-Closeness: privacy beyond k-anonymity and l-diversity. In: Proceedings of the 23rd International Conference on Data Engineering; 2007.
  4. Machanavajjhala A, Gehrke J, Kifer D, Venkitasubramaniam M. l-Diversity: privacy beyond k-anonymity. ACM Trans Knowl Discov Data. 2007;1(1):Article 3.
  5. Nergiz M, Atzori M, Clifton C. Hiding the presence of individuals from shared databases. In: Proceedings of the ACM SIGMOD International Conference on Management of Data; 2007. p. 665–76.
  6. Nergiz ME, Clifton C. Thoughts on k-anonymization. Data Knowl Eng. 2007;63(3):622–45.
  7. Øhrn A, Ohno-Machado L. Using Boolean reasoning to anonymize databases. Artif Intell Med. 1999;15(3):235–54.
  8. Sweeney L. Achieving k-anonymity privacy protection using generalization and suppression. Int J Uncertain Fuzziness Knowl Based Syst. 2002;10(5):557–70.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. Department of Computer Science, Purdue University, West Lafayette, USA

Section editors and affiliations

  • Chris Clifton
  1. Department of Computer Science, Purdue University, West Lafayette, USA