Really! Why is my big data a time bomb?
Over $100 billion in fines has been paid in the US for non-compliance since 2007! More than $2.5 billion of that, in 2015 alone, resulted from incomplete and inaccurate data used to comply with anti-money laundering (AML) regulations. And it is not just financial institutions: most firms are sitting on a ticking time bomb. How? Dr Seth Rao, CEO of FirstEigen, explores the issue.
Data that is not verified cannot be trusted. A 2013 survey of data management professionals revealed that data quality, accuracy and reconciliation are significant problems in big data projects. With the increase in data volume, variety and velocity, data quality has become more important than ever. As more business processes are automated, data quality becomes the rate-limiting factor for overall process quality. The trustworthiness of big data remains questionable at best, and as a result big data projects across the industry are failing to deliver the intended returns.
Untrustworthy data is very expensive. Gartner reports that 40% of data initiatives fail due to poor data quality, which also reduces overall labour productivity by roughly 20%. That is a huge loss that is hard to even put a cost figure on! Forbes and PwC report that poor data quality has been a critical factor leading to regulatory non-compliance. Poor-quality big data is costing companies not only in fines, manual rework to fix errors, inaccurate data used for insights, failed initiatives and longer turnaround times, but also in lost opportunity. Operationally, most organisations fail to unlock the value of their marketing campaigns because of data quality issues.
Why is the current approach to data quality inadequate? When data flows at high volume, in different formats, from multiple sources, validating it is a nightmare. Big data teams fall back on the ad hoc methods of the regular data world, such as hand-written validation scripts ported to big data platforms (a minimal sketch of such a script follows the list below). These approaches run into three major problems:
- They are highly susceptible to human error and to errors introduced by system changes;
- Retrofitting regular data validation tools for big data requires sampling, so 100% of the data is never checked;
- Architectural limitations of existing data validation tools make such approaches non-scalable, unsustainable and unsuitable for big data.
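For illustration, here is a minimal sketch of the kind of ad hoc, script-based check described above, written in PySpark against a hypothetical transactions file. The path, column names and rules are illustrative assumptions, not part of any particular project. Note that it inspects only a sample and relies on hard-coded rules, which is exactly why such scripts are fragile.

```python
# Minimal sketch of an ad hoc, script-based validation job (PySpark).
# The path, column names and thresholds below are hypothetical examples.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("adhoc_dq_check").getOrCreate()

# Read a (hypothetical) day of transactions landed on HDFS.
txns = spark.read.json("hdfs:///landing/transactions/2015-12-01/")

# Typical retrofit of a "regular data" approach: check only a 1% sample,
# so 99% of the records are never inspected.
sample = txns.sample(fraction=0.01, seed=42)

checks = {
    # Hard-coded rules: any schema or upstream system change silently breaks them.
    "null_account_id": sample.filter(F.col("account_id").isNull()).count(),
    "negative_amount": sample.filter(F.col("amount") < 0).count(),
    "bad_currency": sample.filter(~F.col("currency").isin("USD", "EUR", "GBP")).count(),
}

for rule, failures in checks.items():
    print(f"{rule}: {failures} failing rows in sample")

spark.stop()
```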
The problem is exacerbated when multiple big data platforms are thrown into the mix. For example, transactions may flow into an operational NoSQL database (MongoDB, DataStax, etc.) and then into a Hadoop storage repository that may reside in the cloud, with interactions with a traditional data warehouse guaranteed along the way as well. In such scenarios, script-based solutions do not work efficiently and do not provide an end-to-end perspective on data quality.
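To make that fragmentation concrete, the sketch below shows what a hand-rolled, end-to-end check typically reduces to: comparing record counts across the three hops, each through its own client library and query dialect. The connection strings, collection and table names are hypothetical.

```python
# Sketch: reconciling record counts across a NoSQL store, Hadoop and a
# warehouse. Connection details, collection and table names are hypothetical.
from pymongo import MongoClient
from pyspark.sql import SparkSession
import psycopg2  # stand-in for whatever JDBC/ODBC warehouse driver is in use

# 1. Operational NoSQL store (e.g. MongoDB).
mongo = MongoClient("mongodb://ops-db:27017/")
source_count = mongo["payments"]["transactions"].count_documents({})

# 2. Hadoop landing zone (e.g. Parquet files on HDFS, read via Spark).
spark = SparkSession.builder.appName("count_check").getOrCreate()
hadoop_count = spark.read.parquet("hdfs:///lake/transactions/").count()

# 3. Traditional data warehouse.
with psycopg2.connect("dbname=dwh host=warehouse user=dq") as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT COUNT(*) FROM fact_transactions")
        dwh_count = cur.fetchone()[0]

print(source_count, hadoop_count, dwh_count)
# Even if the counts match, nothing here proves the *content* survived each hop:
# field-level reconciliation across three query dialects is left to hand code.
```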
This translates into big data projects spending 50-60% of their time and budget detecting and fixing quality issues. Despite that significant effort and investment, the quality of big data remains questionable at best.
Solution: Organisations should only consider big data validation solutions that can ingest data at high velocity across multiple platforms (regular and big data alike), parse a variety of data formats without transformation, and scale with the underlying big data platform.
They must support cross-platform data profiling, cross-platform data quality tests, cross-platform reconciliation and anomaly detection. Seamless integration with existing enterprise infrastructure, such as scheduling systems, is needed to operationalise data quality end to end.
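As a rough illustration of what cross-platform reconciliation and anomaly detection mean in practice, the sketch below compares summary "fingerprints" (row counts and column totals) computed natively on any two platforms and flags deviations beyond a tolerance. The class, field names and thresholds are illustrative assumptions, not a description of any particular product.

```python
# Sketch: platform-agnostic reconciliation of summary "fingerprints"
# (row counts and column totals) plus a simple anomaly-detection threshold.
# Class, field names and tolerances are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Fingerprint:
    platform: str
    row_count: int
    amount_total: float  # e.g. SUM(amount) computed natively on each platform

def reconcile(src: Fingerprint, tgt: Fingerprint, tolerance: float = 0.001) -> list[str]:
    """Return human-readable discrepancies between two platforms."""
    issues = []
    if src.row_count != tgt.row_count:
        issues.append(
            f"row count mismatch: {src.platform}={src.row_count} "
            f"vs {tgt.platform}={tgt.row_count}"
        )
    # Anomaly rule: totals may drift slightly (rounding, late-arriving records),
    # but a relative gap beyond `tolerance` is flagged for investigation.
    gap = abs(src.amount_total - tgt.amount_total) / max(abs(src.amount_total), 1.0)
    if gap > tolerance:
        issues.append(
            f"amount drift of {gap:.2%} between {src.platform} and {tgt.platform}"
        )
    return issues

# Usage: fingerprints would be computed natively on, say, MongoDB and Hadoop,
# then compared centrally without moving the raw data.
mongo_fp = Fingerprint("mongodb", 1_204_331, 98_771_204.55)
hive_fp = Fingerprint("hadoop", 1_204_310, 98_770_990.10)
for issue in reconcile(mongo_fp, hive_fp):
    print(issue)
```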