
Bad data is an unnecessary overhead cost

As the cost associated with bad data rises, companies need to find the optimal quality level to gain the best ‘bang for their buck’ from their available data.

It is no secret that companies are storing increasingly large amounts of data. In fact, the International Data Corporation (IDC) has forecasted that the global datasphere will grow from 33 Zettabytes (ZB) in 2018 to 175 ZB by 2025 – and that over 22 ZB of storage capacity must be created across all media types during this same period to keep up with the storage demands. 

As companies continue to use their data to make more informed strategic and operational decisions, it is no surprise that 81% of marketers expect the majority of their decisions to be data-driven in 2020. Yet, while most companies around the world say that data supports their business decisions, only 44% actually trust their data to make important business decisions. In our experience, companies often neglect to undertake proper data quality maintenance, which is costing them more than they think.

The cost of poor-quality data 

Research by Gartner has found that organisations believe poor data quality to be responsible for an average of $15 million per year in losses. But how can a company really be sure how much money it loses due to bad data?

The generally accepted effects of bad data quality are known to be negative and significant. For example, poor data quality can lead to:  

1. Decreased customer satisfaction: for example, when the CRM system puts the wrong names on an email sent to loyalty programme members.

2. Increased project or operational expenditure (Opex): according to the IDC, data analysts spend roughly 80% of their time searching for, preparing and governing data. If data quality is not up to standard, correcting errors costs valuable time and resources.

3. Inefficient decision-making processes: for example, a bank that receives faulty transactional data has to wait for the data quality team to fix it before it can react or make decisions. Or worse, it does not realise there are gaps in the data and makes misguided decisions based on the faulty data.

Haug et al. (2013)¹ argue that there are two main costs associated with poor data quality: (1) data maintenance costs, incurred in getting data cleaned and optimised, and (2) costs inflicted by poor data quality. The costs inflicted by poor-quality data can be further categorised into direct and hidden costs.

Data maintenance costs are straightforward and include any costs incurred in improving, cleansing and preparing data for use by the company. One of the biggest headaches poor data quality creates is having to fix the errors, which can lead to costly delays in time and money, particularly when an issue is only discovered late and it either takes a long time to trace where the problem occurred, or proves too expensive and time-consuming to fix. Time spent validating errors is time that could otherwise have been used to analyse data and shape innovative business strategies.

While direct costs are much more tangible and measurable, such as manufacturing errors, wrong deliveries or payment errors, hidden costs are more challenging to measure and may include, but are not limited to, a loss of brand reputation or the opportunity cost of missing a trend or business lead.

The amount of data a company has means nothing if the company can’t trust its data. Companies can spend a lot of money investing in data quality maintenance, but how does a company know when an optimal level of data quality has been reached?  

What is the optimal level of data maintenance?  

Quite simply, the optimal level of data maintenance is not perfect data, but the level at which the costs of data maintenance do not exceed the savings from avoiding the costs inflicted by poor-quality data. If a company spends more on data maintenance than it saves by having better data, the spend becomes an over-expenditure.
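As a rough sketch of that break-even logic (all figures below are hypothetical and purely for illustration), the comparison can be expressed in a few lines of Python:

```python
# Hypothetical illustration of the break-even logic: maintenance spend is only
# worthwhile while the losses it prevents exceed what it costs.

scenarios = [
    {"maintenance_spend": 100_000, "losses_avoided": 400_000},
    {"maintenance_spend": 300_000, "losses_avoided": 700_000},
    {"maintenance_spend": 600_000, "losses_avoided": 800_000},
    {"maintenance_spend": 900_000, "losses_avoided": 820_000},  # past the optimal level
]

for s in scenarios:
    net = s["losses_avoided"] - s["maintenance_spend"]
    verdict = "worthwhile" if net > 0 else "over-expenditure"
    print(f"spend {s['maintenance_spend']:>9,} | losses avoided {s['losses_avoided']:>9,} "
          f"| net {net:>9,} ({verdict})")
```

In this toy example, the third scenario still pays for itself, while the fourth spends more on maintenance than it saves on better data.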

How can a company minimise the effects of poor-quality data?

The best way to minimise the risks of poor-quality data to the company is by implementing a data quality practice. This is an intentional set of processes aimed at improving the quality of a company’s data over time: assessing tool requirements and implementing the relevant tools, monitoring the quality of data, and integrating it with a broader Data Governance practice.
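To make the idea of routine monitoring concrete, the sketch below shows the kind of automated rule a data quality practice might run. The column names, rules and sample records are assumptions for illustration, not a prescribed toolset:

```python
# Minimal sketch of an automated data quality check (assumed fields and rules):
# flag records that are incomplete or invalid before they reach downstream
# reporting or customer communications.

from dataclasses import dataclass

@dataclass
class CustomerRecord:
    customer_id: str
    email: str
    loyalty_name: str  # name used in loyalty-programme mailings

def quality_issues(record: CustomerRecord) -> list[str]:
    """Return the list of rule violations for a single record."""
    issues = []
    if not record.customer_id:
        issues.append("missing customer_id")
    if "@" not in record.email:
        issues.append("invalid email")
    if not record.loyalty_name.strip():
        issues.append("blank loyalty name")  # would cause the wrong-name email problem above
    return issues

records = [
    CustomerRecord("C001", "ann@example.com", "Ann"),
    CustomerRecord("C002", "not-an-email", ""),
]

for r in records:
    problems = quality_issues(r)
    if problems:
        print(f"{r.customer_id or '<no id>'}: {', '.join(problems)}")
```

Checks like these would typically be scheduled against source systems, with their results fed into the broader Data Governance reporting.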

How to implement a data quality practice? 

At PBT Group, we suggest a three-phase model to get this right.

A data quality practice can give the company more reliable data for accurate analytics, easier regulatory compliance and reporting, as well as an increase in revenue through better business decisions.

Bad data is bad for business. The costs of poor-quality data can be much more than companies think, with reputational damage and opportunity costs factored in. However, with a clear data quality practice plan in place, a company can minimise the risks of poor-quality data – and ensure it is gaining the maximum ‘bang for buck’ value from available data.  

Source: 

¹ Haug, A., Zachariassen, F. & Van Liempd, D. (2013). The Cost of Poor Quality Data. Journal of Industrial Engineering and Management. Available: ScienceDirect.
