Why HCM Data Quality Matters

The following blog post was originally posted to the ROC UK web site on 26 August 2014.

There’s an acronym in computer science that holds true in many areas of business, including HCM: GIGO. It stands for “Garbage In, Garbage Out” and refers to the fact that computers process data literally: if you enter data that is no good, you’ll end up with data that is no good. Even when you put an intelligent human in the mix, giving them poor quality data is more than likely to produce rubbish results.

So what impact does this have on businesses? Well, it can hit the bottom line pretty hard.

  • Data issues can lead to decisions being made based on inaccurate or erroneous information, or to opportunities being missed. These in turn lead to wasted effort, re-work and lost earnings.
  • It can lead to loss of trust in a particular department or area where data is known to be out of date or poorly maintained.
  • Sometimes poor quality data can result in the breach of contractual or legal commitments, which can have direct financial implications.

HCM data is no exception, and since an organisation’s greatest asset is its people, it stands to reason that the data about those people is also very valuable to an organisation. With big data initiatives also drawing upon HCM data, this has even further-reaching implications than in previous years.

So how do you tackle poor data quality? The usual approach is a combination of fixing the issue and then (assuming it is cost effective to do so) taking steps to address the source of the issue so that it doesn’t recur. In order to do the latter, though, you also need to locate the source(s).

The predominant sources of poor quality data are people. These could be honest typos (particularly on those small mobile device keyboards), ambiguous questions on data entry or even transcription of bad handwriting. But erroneous data can also be introduced more autonomously: imperfect data loads from other systems, inaccurate OCR (Optical Character Recognition) transcription of scanned documents and forms, or even glitches in peripherals such as scanners, time clocks, etc.

Humans are amazingly adaptive and great at spotting patterns, which means that we’re actually pretty well equipped to recognise data issues. Sometimes we can even be quite good at rectifying and resolving them. The thing we’re not so good at is doing it quickly and at scale. As soon as speed and size are involved, we increase the risk of missing issues or introducing new ones. Fortunately, computers are really good at speed and scale, and there are solutions out there that can help organisations with their HCM data quality.
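
To make that concrete, here is a minimal sketch in Python of the kind of automated check such a solution might run across thousands of employee records in moments. The field names and rules are entirely hypothetical assumptions for illustration, not the schema or rule set of any particular HCM product.

```python
# Illustrative only: hypothetical employee records and checks, not a real
# HCM system's schema or any particular vendor's rules.
from datetime import date

employees = [
    {"id": "E001", "name": "Jane Doe", "email": "jane.doe@example.com",
     "start_date": date(2012, 3, 1)},
    {"id": "E002", "name": "", "email": "not-an-email",
     "start_date": date(2031, 1, 1)},
]

def find_issues(record):
    """Return a list of data quality issues spotted in one employee record."""
    issues = []
    if not record["name"].strip():
        issues.append("missing name")
    if "@" not in record["email"]:
        issues.append("malformed email address")
    if record["start_date"] > date.today():
        issues.append("start date is in the future")
    return issues

# Flag every record that fails one or more checks.
for employee in employees:
    for issue in find_issues(employee):
        print(f"{employee['id']}: {issue}")
```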

Of course, not every organisation has poor quality data. But how many organisations actually know what the quality of their HCM data is? How do you know it won’t succumb to a bad data source in the future? These solutions can also help you to objectively measure the quality of your data and to proactively monitor it over time.
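
Building on the hypothetical checks sketched above, measuring and monitoring could be as simple as turning the results into a score that is recorded at regular intervals; again, this is an illustrative sketch rather than how any particular product works.

```python
def quality_score(records, check=find_issues):
    """Percentage of records with no detected issues."""
    if not records:
        return 100.0
    clean = sum(1 for record in records if not check(record))
    return 100.0 * clean / len(records)

# Run this on a schedule and chart the results to watch quality trend over time.
print(f"HCM data quality: {quality_score(employees):.1f}% of records are issue-free")
```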

Humans and technology may be the source of poor quality data, but thankfully they are also the solution.

Author: Stephen Millard
Tags: | sap |
