A series of blogs about the importance and value of a consistent, comprehensive, and pervasive approach to data quality.
Poor data quality costs businesses millions of dollars every year. Depending on which analyst survey you read, the direct cost of poor data quality is between $9.0 million and $15.0 million per year. The business impact of this poor data quality ranges from day-to-day failures—such as poor customer service, inefficient marketing processes, and supply chain errors—to major operational failures, such as breakdowns in customer and product hubs or failed digital transformation initiatives. And the costs are not only financial—poor data quality can also lead to loss of reputation and an inaccurate assessment of risk.
It is difficult to prevent bad data from entering the business—and difficult to prevent it from propagating as it moves throughout the business from application to application. When organizations attempt to address the problem, the response is often tactical in nature and departmental in scope—and, inevitably, unsustainable.
To stem the losses caused by bad data quality, organizations need to establish and invest in the people, processes, and infrastructure required to ensure that relevant and trusted data can be delivered to the business when, where, and however it’s needed.
The most effective business leaders today recognize the role of data governance in their digital transformation efforts, including critical initiatives such as accelerating customer-centricity to increase customer loyalty and revenue, driving new productivity savings from legacy processes and systems, generating market-shaping insights from big data and analytics, and moving workloads to the cloud to reduce TCO and improve security.
In this series of blogs, we’ll take a look at the challenges and obstacles of delivering data quality for everyone, everywhere today. We’ll outline new ways that your organization can overcome those obstacles and discuss the value of extending data quality to include all stakeholders, all data domains, and all applications.
Business leaders increasingly recognize that high-quality, trusted data underpins their digital transformation efforts. However, a survey conducted by McKinsey (1) reported that poor data quality and availability can reduce employee productivity by 30 percent (and for analytics teams, this figure can be as high as 50 percent). And as organizations add operational units, and as customers, products, and partners become more geographically dispersed, data quality problems become more pervasive—leading to poor customer experiences, lost opportunities, and failure to comply with regulations.
As more and more studies are published, the business case for data quality becomes self-evident. Companies that invest in data quality are seeing the benefits: They have greater confidence in their systems, spend less time reconciling data, can deliver a single version of the truth, and see higher customer satisfaction and lower costs. These companies are able to take advantage of their data assets to work faster, better, and smarter to beat the competition.
Without a consistent, comprehensive way to manage data quality across the enterprise, bad data continues to propagate. Confidence in data continues to erode. Costs continue to rise. Your business remains at risk.
All organizations have data quality issues. Unfortunately, data quality issues are often exposed for the first time when departmental barriers are removed, and data moves between applications.
The costs of data quality problems are high. Confidence and trust in data—and in the systems that contain the data—erode as business users get frustrated with incomplete, inconsistent, and inaccurate data. Poor-quality data can result in all kinds of costly problems such as project and reporting delays, missed targets, process errors, compliance issues, and dissatisfied customers. As data requirements expand beyond customer data, as more data is needed in real time, and as data is shared with users outside the firewall, the potential for data quality problems only increases.
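To make the idea of "incomplete, inconsistent, and inaccurate data" concrete, here is a minimal, generic sketch in Python with pandas of the kinds of completeness, validity, consistency, and accuracy checks described above. It is not tied to any particular product, and all column names, reference values, and rules are illustrative assumptions.

```python
import pandas as pd

# Hypothetical customer records with typical quality problems.
customers = pd.DataFrame({
    "customer_id": [101, 102, 103, 104],
    "email":       ["a@example.com", None, "not-an-email", "d@example.com"],
    "country":     ["US", "US", "GB", "XX"],           # "XX" is not in the reference list
    "signup_date": ["2022-01-05", "2022-02-17", "2022-03-02", "2022-03-40"],
})

issues = {
    # Completeness: required fields must not be missing.
    "missing_email": customers["email"].isna(),
    # Validity: email must match a basic pattern.
    "invalid_email": ~customers["email"].fillna("").str.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    # Consistency: country must come from an agreed reference list.
    "unknown_country": ~customers["country"].isin({"US", "GB", "DE", "FR"}),
    # Accuracy: dates must parse to real calendar dates.
    "bad_signup_date": pd.to_datetime(customers["signup_date"], errors="coerce").isna(),
}

report = pd.DataFrame(issues)
print(report.sum())                    # count of records failing each rule
print(customers[report.any(axis=1)])   # the records that need remediation
```

Even a small script like this shows why ad hoc fixes do not scale: every new dataset, channel, or department would need its own copy of these rules, which is exactly the one-off pattern discussed below.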
The business can’t solve data quality problems on its own. Line-of-business managers, business analysts, and data stewards need the appropriate tools and processes throughout the data lifecycle. And IT often cannot respond within the time frames the business requires.
In response to these kinds of scenarios, individual departments and business units frequently implement their own data quality projects. And while these projects may solve an immediate problem or meet an immediate need, this one-off approach has larger implications: For one thing, these individual projects aren't part of an overall strategy to improve data quality across the enterprise. For another, any data quality rules or artifacts created for an individual project cannot be reused for other projects or applications, as the sketch below illustrates.
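The following is a hypothetical sketch of the reusability point above: rules defined once, by name, in a shared catalog that any project can apply to its own data, rather than hard-coded inside one department's script. The class, catalog, and function names here are illustrative assumptions, not a real product API.

```python
from dataclasses import dataclass
from typing import Callable
import pandas as pd

@dataclass
class Rule:
    name: str
    column: str
    check: Callable[[pd.Series], pd.Series]  # returns True where a value passes

# A shared rule catalog, maintained once and imported by any project.
RULE_CATALOG = [
    Rule("email_present", "email", lambda s: s.notna()),
    Rule("country_in_reference", "country", lambda s: s.isin({"US", "GB", "DE", "FR"})),
]

def apply_rules(df: pd.DataFrame, rules=RULE_CATALOG) -> pd.DataFrame:
    """Return a pass/fail matrix for every rule whose column exists in the dataset."""
    results = {r.name: r.check(df[r.column]) for r in rules if r.column in df.columns}
    return pd.DataFrame(results)
```

Because the catalog is shared, a marketing project, a finance report, and a customer hub can all evaluate the same rules and compare results, instead of each rebuilding (and subtly diverging on) its own checks.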
It’s no longer sufficient to have one or two tactical data quality initiatives. As data volumes grow, as data requirements increase, and as data flows through new channels, data quality must be addressed at an enterprise level. Data quality must become pervasive.
For data quality to become pervasive, organizations must implement data quality for “Everyone, Everywhere”: establishing the organization, processes, and infrastructure necessary to deliver relevant, trusted data to all stakeholders, across all data domains and all applications, when, where, and however it’s needed.
In my next blog, I’ll look at why data quality does not currently extend to everyone, everywhere. In the meantime, why not sign up for a free 30-day trial of Cloud Data Quality?