When implementing a Data Governance Framework, one of the most critical components to include is a Data Quality Issue Resolution process. Think of this as the backbone for managing data quality – without it, you’re left with chaos and inconsistent approaches. A central process ensures that everyone knows how to flag issues and follow a clear path to investigate and resolve them.
At the heart of this process lies the Data Quality Issue Log. This isn’t just another spreadsheet – it’s the central tool for tracking, investigating, and reporting on data quality issues.
But what should go into such a log? Let’s break it down and explore the key elements to include, along with why they matter.

Source: Improving Administrative Data Quality for Research and Analysis
Key Elements of a Data Quality Issue Log
1. ID: Numbering the Issues
Every issue needs a unique identifier, typically a simple sequential number (001, 002, 003, etc.). This keeps things straightforward and makes it easy to answer questions like, “How many issues have been identified so far?” This approach works well in Excel, but if you’re using an existing system like a Helpdesk or Operational Risk tool, it’s best to follow their protocols.
2. Date Raised: Keeping Timelines in Check
Tracking when an issue is raised is essential for monitoring resolution times and identifying bottlenecks. A quick tip: always stick to a consistent date format. It’s a small detail, but inconsistent formats can make a log look messy and harder to analyze.
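As a small illustration of the consistent-format tip, ISO 8601 dates (YYYY-MM-DD) sort correctly even as plain text, which keeps a spreadsheet-based log easy to filter and analyze. This sketch uses Python's standard library; the variable name is just for illustration.

```python
from datetime import date

# ISO 8601 (YYYY-MM-DD) sorts chronologically even when treated
# as plain text, so a spreadsheet column stays easy to sort and filter.
raised = date(2024, 3, 1)
print(raised.isoformat())  # -> 2024-03-01
```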
3. Raised By: Who Found the Issue?
This field records the name and department of the person who flagged the issue. Over time, patterns will emerge, showing who your key data users are. These are often the people closest to the data, making them valuable allies in improving quality. Plus, knowing who raised an issue helps when following up or providing progress updates.
4. Short Name of Issue: A Quick Reference
While not always essential, including a short name for each issue makes communication much easier. For example, referring to “Duplicate Customer Issue” is far clearer than using a cryptic ID like “Issue 067” or a long-winded description. It’s especially useful when presenting updates or discussing issues with stakeholders.
5. Detailed Description: The Full Picture
Every issue needs a detailed explanation. This is where all the context and specifics go, helping anyone who reviews the log understand what’s happening. It’s the foundation for investigations and remediation efforts.
6. Impact: Prioritizing Efforts
Not all issues are created equal, so understanding their impact is crucial for prioritization. A simple classification system like High, Medium, and Low works well, but it’s important to define these categories in clear business terms. For instance, is “High” reserved for issues affecting critical systems or entire departments? Clear definitions avoid confusion and help focus resources on the most pressing problems.
7. Data Owner: Assigning Responsibility
Without a clear owner, issues can easily fall through the cracks. This field ensures that someone is accountable for investigating and resolving the problem. While the Data Governance Team supports the process, the Data Owner takes the lead in driving solutions.
8. Status: Tracking Progress
A status field is key for monitoring where issues stand. While “Open” and “Closed” are standard options, consider adding others like “Accepted” for issues that can’t currently be resolved but need to be tracked for future action. This approach helps maintain clarity and ensures no issue is forgotten.
9. Updates and Target Resolution Date: Staying on Track
Regular updates keep the log dynamic, showing progress and outlining next steps. Including a target resolution date is also helpful, as it creates a clear timeline and helps prioritize efforts.
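To pull the nine elements together, here is a minimal sketch of what one row of the log might look like as a structured record. The field names, enum values, and example data are illustrative assumptions, not a prescribed schema – adapt them to your own log, whether it lives in Excel or a dedicated tool.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import List, Optional


class Impact(Enum):
    HIGH = "High"      # e.g. affects critical systems or entire departments
    MEDIUM = "Medium"
    LOW = "Low"


class Status(Enum):
    OPEN = "Open"
    ACCEPTED = "Accepted"  # cannot be resolved now, tracked for future action
    CLOSED = "Closed"


@dataclass
class DataQualityIssue:
    """One row in a Data Quality Issue Log (field names are illustrative)."""
    issue_id: int                 # 1. sequential unique identifier (001, 002, ...)
    date_raised: date             # 2. when the issue was flagged
    raised_by: str                # 3. name and department of the reporter
    short_name: str               # 4. quick reference for discussions
    description: str              # 5. full context for investigation
    impact: Impact                # 6. High / Medium / Low, defined in business terms
    data_owner: str               # 7. accountable for investigation and resolution
    status: Status = Status.OPEN  # 8. Open / Accepted / Closed
    updates: List[str] = field(default_factory=list)  # 9. dated progress notes
    target_resolution_date: Optional[date] = None     # 9. planned fix date


# Example entry using the "Duplicate Customer Issue" from the article.
issue = DataQualityIssue(
    issue_id=1,
    date_raised=date(2024, 3, 1),
    raised_by="A. Analyst, Finance",
    short_name="Duplicate Customer Issue",
    description="Multiple customer records share the same tax ID.",
    impact=Impact.HIGH,
    data_owner="Customer Data Owner",
)
issue.updates.append("2024-03-05: root cause traced to nightly merge job.")
```

Even if the log stays in a spreadsheet, thinking of each issue as a record like this keeps the columns consistent and makes a later move to a dedicated tool straightforward.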

Why This Matters
A well-structured Data Quality Issue Log empowers teams to manage and improve data quality. It provides clarity, accountability, and a way to measure progress over time. While starting with a simple Excel-based system is a great first step, managing increasing volumes of issues may require more advanced tools to avoid inefficiencies.
For those tackling growing complexity, tools like DQLog simplify the process and eliminate the headache of managing logs manually. Whether you use spreadsheets or specialized software, apply consistency, clarity, and a focus on actionable insights.
By keeping the process straightforward and the log tailored to specific needs, organizations can turn data quality issues into opportunities for improvement, ultimately building trust and reliability in their data assets.
Disclaimer: This article was authored by Nicola Askham, Data Governance Coach at Nicola Askham Ltd.