Data scrubbing refers to the task of first identifying data that are corrupted, incomplete, invalid, missing, inconsistent, outdated, duplicated, or irrelevant and then either correcting or removing such “dirty” data. The aim of data scrubbing is to make data more accurate, more complete, and consistent both within and across different tables in a database or data warehouse.
An important challenge of data scrubbing is that “dirty” values do not necessarily contradict any database requirements, i.e., such values are consistent with the design of a database and its schema. Rather, errors occur at a higher conceptual level. Examples include credit card numbers that follow the correct grouping of four groups of four digits but are invalid with regard to a checksum algorithm, or addresses that have a valid ZIP code that is inconsistent with the town and state names in the same record. Such errors can occur because of a lack of checks and validation...
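The credit card example above can be made concrete with a small sketch. The source does not name a specific checksum, but the standard check digit scheme for credit card numbers is the Luhn algorithm; the 16-digit length check below mirrors the “four groups of four digits” format mentioned in the text and is an illustrative assumption, not part of any real validation library.

```python
def luhn_valid(number: str) -> bool:
    """Return True if a card number string passes the Luhn checksum.

    Accepts numbers with or without spaces/dashes between digit groups.
    """
    digits = [int(ch) for ch in number if ch.isdigit()]
    if len(digits) != 16:  # assumed format: four groups of four digits
        return False
    total = 0
    # Walk the digits from right to left; double every second digit and
    # subtract 9 when the doubled value exceeds 9 (Luhn's rule).
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# A syntactically well-formed number can still be "dirty": both strings
# below follow the four-by-four grouping, but only the first passes the checksum.
print(luhn_valid("4242 4242 4242 4242"))  # True
print(luhn_valid("4242 4242 4242 4243"))  # False
```

This illustrates the point of the paragraph: the invalid number satisfies every structural constraint a schema would typically enforce (length, digit grouping), so detecting it requires a semantic check at a higher conceptual level.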