The organization's existing techniques for collecting, presenting, and validating metrics must be evaluated in preparation for automating selected repeatable processes.
Cataloging existing measurements and qualifying their relevance helps to filter out processes that do not provide business value and reduces potential duplication of effort in measuring and monitoring critical data quality metrics. The surviving measurements of relevant metrics are collected and presented hierarchically within a scorecard, reflecting how individual metrics roll up into higher-level characterizations of compliance with expectations while allowing drill-down to isolate the source of specific issues.
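As a concrete illustration of this roll-up and drill-down behavior, the sketch below models metrics and scorecard levels as small Python classes: each level's compliance score is an unweighted average of its children, and drill-down walks the hierarchy to isolate the metrics that miss their expectation threshold. This is an illustrative assumption, not a prescribed design; the class names, the 0-to-1 scoring convention, and the equal weighting are all placeholders.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Metric:
    """A single data quality measurement, scored 0.0-1.0 against its expectation."""
    name: str
    score: float

@dataclass
class ScorecardNode:
    """One level in the scorecard hierarchy: a characterization that rolls up
    child nodes and individual metrics into a single compliance score."""
    name: str
    metrics: List[Metric] = field(default_factory=list)
    children: List["ScorecardNode"] = field(default_factory=list)

    def rollup(self) -> float:
        """Average the scores of all metrics and child nodes (equal weighting
        is assumed here; a real scorecard might weight by business impact)."""
        scores = [m.score for m in self.metrics] + [c.rollup() for c in self.children]
        return sum(scores) / len(scores) if scores else 1.0

    def drill_down(self, threshold: float = 0.95, path: str = "") -> List[str]:
        """Walk the hierarchy and report the specific metrics that fall below
        the expectation threshold, isolating the source of an issue."""
        here = f"{path}/{self.name}"
        issues = [f"{here}/{m.name}: {m.score:.2f}"
                  for m in self.metrics if m.score < threshold]
        for child in self.children:
            issues.extend(child.drill_down(threshold, here))
        return issues

# Example: completeness and validity metrics roll up into a "Customer Data" score.
customer = ScorecardNode(
    name="Customer Data",
    children=[
        ScorecardNode("Completeness", metrics=[Metric("null email rate", 0.91),
                                               Metric("null phone rate", 0.99)]),
        ScorecardNode("Validity", metrics=[Metric("valid postal codes", 0.97)]),
    ],
)
print(f"Customer Data compliance: {customer.rollup():.2f}")
print("Below expectation:", customer.drill_down())
```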
As shown in Figure 1, collecting the measurements for a data quality scorecard incorporates:
1. Standardizing business processes for automatically populating selected metrics into a common repository
2. Collecting requirements for an appropriately designed data model for capturing data quality metrics
3. Standardizing a template for reporting and presenting data quality metrics
4. Automating the extraction of metric data from the repository
5. Automating the population of the reporting and presentation template, that is, the data quality scorecard (see the sketch below)
Figure 1: Collecting the measurements for a data quality scorecard.
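The following is a minimal, self-contained sketch of how steps 1 through 5 might fit together; it is not drawn from the original text. A hypothetical dq_metric table stands in for the common repository and its data model (steps 1 and 2), record_metric standardizes how business processes populate it, and build_scorecard automates the extraction of metric data and the population of a plain-text scorecard template (steps 3 through 5). The table name, columns, scoring convention, and report layout are illustrative assumptions, and SQLite stands in for whatever repository the organization actually uses.

```python
import sqlite3
from datetime import date

# Hypothetical repository schema; a shared database would be used in practice.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE dq_metric (
        metric_name   TEXT NOT NULL,   -- e.g. 'null email rate'
        dimension     TEXT NOT NULL,   -- e.g. 'Completeness', 'Validity'
        data_domain   TEXT NOT NULL,   -- e.g. 'Customer Data'
        measured_on   DATE NOT NULL,
        score         REAL NOT NULL    -- 0.0-1.0 compliance with expectation
    )
""")

def record_metric(metric_name, dimension, data_domain, score, measured_on=None):
    """Step 1: a standardized entry point each business process calls to
    populate its metrics into the common repository."""
    conn.execute(
        "INSERT INTO dq_metric VALUES (?, ?, ?, ?, ?)",
        (metric_name, dimension, data_domain,
         measured_on or date.today().isoformat(), score),
    )

def build_scorecard():
    """Steps 4 and 5: extract metric data from the repository and populate a
    simple text scorecard template, rolled up by domain and dimension."""
    rows = conn.execute(
        """SELECT data_domain, dimension, AVG(score)
           FROM dq_metric
           GROUP BY data_domain, dimension
           ORDER BY data_domain, dimension"""
    ).fetchall()
    lines = ["Data Quality Scorecard", "======================"]
    for domain, dimension, avg_score in rows:
        lines.append(f"{domain:<15} {dimension:<15} {avg_score:6.2%}")
    return "\n".join(lines)

# Each measuring process reports through the same interface (step 1) ...
record_metric("null email rate", "Completeness", "Customer Data", 0.91)
record_metric("valid postal codes", "Validity", "Customer Data", 0.97)
# ... and the scorecard is regenerated automatically (steps 4 and 5).
print(build_scorecard())
```

In a production setting the same separation would hold: processes write only through the standardized interface, and the scorecard is rebuilt entirely from the repository, so the presentation layer never depends on any individual measuring process.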