A series of blogs about the importance and value of a consistent, comprehensive, and pervasive approach to data quality.
Poor data quality costs businesses millions of dollars every year. Depending on which analyst survey you read, the direct cost of poor data quality is between $9.0 million and $15.0 million per year. The business impact of this poor data quality ranges from day-to-day failures—such as poor customer service, inefficient marketing processes, and supply chain errors—to major operational failures, such as breakdowns in customer and product hubs or failed digital transformation initiatives. And the costs are not only financial: poor data quality can also lead to loss of reputation and an inaccurate assessment of risk.
It is difficult to prevent bad data from entering the business—and difficult to prevent it from propagating as it moves throughout the business from application to application. When organizations attempt to address the problem, the response is often tactical in nature and departmental in scope—and, inevitably, unsustainable.
To stem the losses caused by bad data quality, organizations need to establish and invest in the people, processes, and infrastructure required to ensure that relevant and trusted data can be delivered to the business when, where, and however it’s needed.
The most effective business leaders today recognize the role of data governance in their digital transformation efforts, including critical initiatives such as accelerating customer-centricity to increase customer loyalty and revenue, driving new productivity savings from legacy processes and systems, generating market-shaping insights from big data and analytics, and moving workloads to the cloud to reduce TCO and improve security.
In this series of blogs, we’ll take a look at the challenges and obstacles of delivering data quality for everyone, everywhere today. We’ll outline new ways that your organization can overcome those obstacles and discuss the value of extending data quality to include all stakeholders, all data domains, and all applications.
The Business Case for Data Quality
The most effective business leaders today recognize the role of high-quality, trusted data in their digital transformation efforts. However, a survey conducted by McKinsey (1) reported that employee productivity can be reduced by 30 percent (and for analytics teams, this figure can be as high as 50 percent) because of poor data quality and availability. And as organizations add operational units, and as customers, products, and partners become more geographically dispersed, data quality problems become more pervasive—leading to poor customer experience, lost opportunities, and failure to comply with regulations.
As more and more studies are published, the business case for data quality becomes more self-evident. Companies that invest in data quality are seeing the benefits: They have greater confidence in their systems, spend less time reconciling data, deliver a single version of the truth, increase customer satisfaction, and reduce costs. These companies are able to take advantage of their data assets to work faster, better, and smarter to beat the competition. Let’s look at some real-life examples:
- Avis Budget Group reduced business risk by profiling and governing telematics data from vehicle GPS and navigation systems and uncovering data quality issues early
- AXA attained profitable growth by identifying cross-sell and upsell opportunities for brokers and partners, enabling sales of more insurance products to its existing customer base
- Citrix achieved a 50% increase in the quality of data at the point of entry and a 50% reduction in the rate of junk and duplicate data
- Elkjøp reduced time to market by up to 60% by decreasing time to onboard new online product information from several hours to only a few minutes
- Rabobank moved closer to its objective of achieving 80% self-service and enhanced the customer experience by ensuring that fewer online applications are abandoned by customers
- National Health Plan reduced duplicate patient records by 86%, enabling staff to spend more time analyzing and using patient data instead of extracting, merging, validating, and cleansing data
Without a consistent, comprehensive way to manage data quality across the enterprise, bad data continues to propagate. Confidence in data continues to erode. Costs continue to rise. Your business remains at risk.
The Challenges of Implementing Data Quality Across the Enterprise
All organizations have data quality issues. Unfortunately, data quality issues are often exposed for the first time when departmental barriers are removed, and data moves between applications.
The costs of data quality problems are high. Confidence and trust in data—and in the systems that contain the data—erode as business users get frustrated with incomplete, inconsistent, and inaccurate data. Poor-quality data can result in all kinds of costly problems such as project and reporting delays, missed targets, process errors, compliance issues, and dissatisfied customers. As data requirements expand beyond customer data and increasingly demand real-time delivery, and as data is shared with users outside the firewall, the potential for data quality problems only increases.
The business can’t solve data quality problems on its own. Line-of-business managers, business analysts, and data stewards need the appropriate tools and processes throughout the data lifecycle. And IT often cannot respond within the time frames the business requires.
In response to these kinds of scenarios, individual departments and business units will frequently implement their own data quality projects. And while these projects may solve an immediate problem or meet an immediate need, this one-off approach has larger implications: these individual projects aren’t part of an overall strategy to improve data quality across the enterprise, and any data quality rules or artifacts created for one project cannot be reused for other projects or applications.
Data Quality Needs to Be Everywhere
It’s no longer sufficient to have one or two tactical data quality initiatives. As data volumes grow, as data requirements increase, and as data flows through new channels, data quality must be addressed at an enterprise level. Data quality must become pervasive.
For data quality to become pervasive:
- More people need to be involved in data quality processes. Data quality needs to be an enterprise-wide endeavor. Everyone—including line-of-business managers, data stewards, analysts, and IT developers—needs to be empowered with the tools they need to take responsibility for the data.
- There needs to be a clear understanding of the negative impact poor data quality has on your business. Everyone in your organization needs to recognize data as the organization’s most critical business asset. Understanding the value of data, both the business and IT need to become actively involved in, and accountable for, ensuring its quality.
- Data quality needs to extend to all domains and landscapes. Data quality must go beyond name and address to include all data domains such as product, financial, and asset data.
- Common data quality rules need to be deployed to all applications. Low-quality data must be proactively prevented from entering the organization, or cleansed using data services.
- Data quality scorecards need to be published and shared. The entire organization needs to monitor and measure data quality across all projects, processes, and applications.
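To make the "common rules" idea above concrete, here is a minimal sketch of a reusable rule set in Python. The field names (`email`, `country`) and the allowed-country list are hypothetical examples, not part of any specific product; real data quality platforms typically express such rules declaratively and apply them at every entry point.

```python
import re

# Hypothetical, illustrative rules keyed by field name.
# Each rule returns True when the value passes.
RULES = {
    "email": lambda v: bool(v) and re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", v) is not None,
    "country": lambda v: v in {"US", "GB", "NO", "NL"},  # illustrative reference list
}

def validate(record):
    """Return the list of field names in the record that fail their rule."""
    return [field for field, rule in RULES.items()
            if not rule(record.get(field, ""))]

# The same rule set can guard any application: a web form, an API, a batch load.
clean = {"email": "ada@example.com", "country": "GB"}
dirty = {"email": "not-an-email", "country": "Atlantis"}
```

Because the rules live in one shared definition rather than inside each application, every entry point enforces the same standard, which is the reuse that one-off departmental projects cannot provide.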
Implementing data quality for “Everyone, Everywhere” means establishing the organization, processes, and infrastructure necessary to:
- Empower all stakeholders
- Support all data domains and landscapes
- Access and deploy common data quality rules to any data, in any source, anywhere
In my next blog, I will look at why data quality does not currently extend to everyone, everywhere. In the meantime, why not sign up for a Free 30-day Trial of Cloud Data Quality?