Ensuring Master Data Accuracy and Currency in Live Systems


Ensuring data accuracy is a major challenge even for technologically advanced corporations. There are two aspects to this problem:

  1. Data inaccuracy due to errors of entry
  2. Data inaccuracy due to lack of data currency – that is, the data may have changed but the change is not reflected in your automated systems.

There was a company which bought a renowned ERP system and engaged the best consultants to implement it. One of the modules implemented was the HRMS. Huge investments were made in software and hardware, in people, and in training, and a dedicated team of technical and functional experts was created. That last step was unprecedented for this company, which already had significant automation through software developed in-house (including an in-house HRMS which was now to be scrapped). Never in its history had a team of functional experts moved out of their department to work full time on an in-house IT project, let alone had even a token team of functional experts been set aside within the department to assist in IT projects.

After several months of trials (again, a practice unheard of when implementing in-house systems), the consultant team left. Yet years into the implementation, this company still did not know its correct head count. Of course, the culprit was not the software tool, but the people and processes around it.

Changes to employee data – new recruitments, resignations, internal transfers, and other people movements – were happening in the company but were not being reflected correctly in the HRMS reports. Obviously, whoever was responsible for prompt entry of this data was not doing it. Department-wise and team-wise headcounts, which were critical for accurate reporting, never matched reality. There were delays of up to 15 days in entering a new hire into the system. After a massive effort, the process for new recruits was streamlined. But other changes in employee status continued to be a problem.

The company then tried a centralized service for all data capture: data from all sources was sent to a central team which entered it into the ERP. This created other serious problems, and there was no improvement in the promptness or accuracy of capturing employee changes. Centralized data entry meant reliving the problems of a decades-old practice that had been forced by the technology of its day; newer technology later relieved many of those problems through distributed data entry. This company nevertheless went back to age-old centralized data entry and paid the price for it.

Let us analyze the problem. In this situation of centralized data entry, there are primarily three issues in data capture:

  1. Data transcription errors
  2. Incompleteness of data – some fields missing such as codes
  3. Transaction not communicated at all – i.e., a change in an employee's status happened but was never communicated to HR

The data generally moves in the following way in this centralized data entry process:

[Figure: data flow in centralized data entry – source of change (decision maker) → communication via forms/e-mails → central data entry team → entry into the ERP]

Transcription errors creep in each time the data moves from one stage to the next.

The solution in this case is based on two best practices or principles which I have defined:

  1. The data should be entered at source.
  2. The data entry process should be an integral part of the transaction process, so that the process cannot be completed unless the data is entered.

Let me explain these principles.

Enter data at source

This principle says that if you want to improve the data accuracy of your computerized system, you must ensure that the data is entered or captured where it originates: at the place where it is first known, or by the person who makes the decision that generates the data.

If the data is not captured at source, it must be communicated to the persons entering it. That communication is prone to transcription errors, errors due to wrong coding, and delays or omissions in conveying the change. This can lead to disastrous results.

The first two problems stated above (Data transcription errors and Incompleteness of data) can be minimized by moving data capture closer to the source, i.e., the decision maker:

[Figure: data flow with entry at source – decision maker (source of change) → direct entry into the ERP]
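To make this concrete, here is a minimal sketch in Python of what entry at source could look like. Every name in it (TransferRecord, capture_at_source, the hard-coded master tables) is hypothetical, not part of any actual ERP; the point is that the decision maker's entry screen validates against the same master data the ERP uses, so bad codes and missing fields are rejected at the moment of capture.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical master tables; in a real system these would be read
# from the ERP's master data, not hard-coded.
VALID_TEAM_CODES = {"T01", "T02", "T03"}
VALID_EMPLOYEE_IDS = {"E1001", "E1002", "E1003"}

@dataclass
class TransferRecord:
    employee_id: str
    from_team: str
    to_team: str
    effective: date

def capture_at_source(employee_id: str, from_team: str,
                      to_team: str, effective: date) -> TransferRecord:
    """Validate against master data at the point of entry.

    Because the decision maker picks values from the same master
    tables the ERP uses, transcription errors (issue 1) and missing
    codes (issue 2) are rejected before a record is ever created.
    """
    if employee_id not in VALID_EMPLOYEE_IDS:
        raise ValueError(f"Unknown employee id: {employee_id}")
    if from_team not in VALID_TEAM_CODES or to_team not in VALID_TEAM_CODES:
        raise ValueError("Team codes must come from the master table")
    if from_team == to_team:
        raise ValueError("Source and destination teams are identical")
    return TransferRecord(employee_id, from_team, to_team, effective)
```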

Let us take employee team transfers as our example, since team transfers accounted for the largest volume of employee transactions. The IT folks should find out where and how the data originates. Here, it was found that team managers decide to transfer employees from one team to another for business reasons, as we shall see. So the data capture should be done right there – by the team managers. The question is: do we make the team managers enter data? No. The answer lies in the next best practice.

Making data capture an integral part of the transfer process

The third problem (transaction not communicated at all) still cannot be solved merely by moving data capture to the source. It can be eliminated only by making the data capture process an integral part of the transaction which needs to be captured. Now this is a little complex to do.

Let us take the team transfer transaction. The transfer should not happen unless the data capture is done. The trick is to make data capture an integral part of the transfer process: provide a tool which is essential for executing the transfer and which captures the data as it is used, so that the process cannot be completed unless the tool is used. For the decision maker not to skip the tool, it must help him in the decision making itself and in completing the process. You can hook the decision maker to the tool only if he finds it useful for his own decisions. If the tool aids decision making, the decision maker will certainly use it, and data capture becomes a by-product of the process rather than an overhead.
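As a rough illustration of this principle – a sketch under assumed names, continuing the TransferRecord example above, not the actual system – the transfer and its data capture can be made one atomic operation: there simply is no way to execute the transfer that does not also write the record.

```python
class TransferTool:
    """Sketch of 'capture as a by-product'. The only way to move an
    employee between teams is through this tool, and the tool writes
    the HRMS record in the same step, so a transfer that is not
    captured simply cannot happen (issue 3 is eliminated by
    construction)."""

    def __init__(self, hrms_store: dict):
        # hrms_store stands in for the real HRMS: here, just a
        # mapping of employee id -> current team code.
        self.hrms_store = hrms_store

    def execute_transfer(self, record: TransferRecord) -> None:
        # The team change and the HRMS update happen together; there
        # is no separate, skippable data-entry step to forget.
        self.hrms_store[record.employee_id] = record.to_team
        self._notify_hrms(record)

    def _notify_hrms(self, record: TransferRecord) -> None:
        # Placeholder for posting the transaction to the HRMS.
        print(f"HRMS updated: {record.employee_id} -> {record.to_team} "
              f"effective {record.effective}")
```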

To create such a tool, the systems analyst needs to dig deeper, which is what very few systems analysts do. Most analysts hear out what the user tells them and document the process without asking why and how users do what they do. The analyst needs to study how the decision is made and what data is required to make it. He needs to see why the need to transfer arises and how it is executed. Team managers transfer team members on considerations like team and individual performance, to balance team efficiencies. They may also need to do so on account of external factors like resignations, new joiners, and the need to move members to new businesses as the company grows.

Having studied this, the analyst can provide a tool that aids the manager in his duty and his decision making by actually giving him all the data he needs. For instance, the team leader needs to know individual and team performance, employees' past history and skills, team resignations, etc. Can we create a tool where the team leader sees his team members listed in buckets and can drag and drop names to reorganize the teams? He would have all the necessary information at his fingertips to decide who should be transferred, plus what-if analysis to see the impact on team performance if members are moved. At the end of his analysis, he finalizes the team structures and submits, and all the necessary transfer requests are automatically posted into the HRMS system. A request may go for approval and, on approval, update the corresponding employee data. Preferably there is a future effective date for transfers, so that the data is ready and approved before the due date and gets automatically updated into the HRMS on that date. There can be no delays in data entry, and little chance of error, as the team leader has already spent time analyzing and rechecking his decisions.
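A minimal sketch of that workflow might look as follows; again, every name here is hypothetical. The manager rearranges draft buckets, the moves become transfer requests on submit, and a scheduled job posts approved requests into the HRMS on their effective date.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TransferRequest:
    employee_id: str
    to_team: str
    effective: date
    status: str = "PENDING"        # PENDING -> APPROVED -> APPLIED

class TeamPlanner:
    """What-if sandbox: the manager rearranges a draft copy of the
    teams (the drag-and-drop buckets); nothing touches the HRMS
    until he submits."""

    def __init__(self, teams: dict[str, list[str]]):
        self.draft = {team: list(members) for team, members in teams.items()}
        self.pending: list[TransferRequest] = []

    def move(self, employee_id: str, from_team: str, to_team: str,
             effective: date) -> None:
        # Equivalent of dragging a name from one bucket to another.
        self.draft[from_team].remove(employee_id)
        self.draft[to_team].append(employee_id)
        self.pending.append(TransferRequest(employee_id, to_team, effective))

    def submit(self) -> list[TransferRequest]:
        # On submit, every draft move becomes a transfer request that
        # is routed for approval; the data was captured as a
        # by-product of the planning itself.
        requests, self.pending = self.pending, []
        return requests

def apply_due_transfers(requests: list[TransferRequest],
                        hrms_store: dict, today: date) -> None:
    """Nightly job: approved requests whose effective date has
    arrived are posted into the HRMS automatically – no manual
    entry, no delay."""
    for req in requests:
        if req.status == "APPROVED" and req.effective <= today:
            hrms_store[req.employee_id] = req.to_team
            req.status = "APPLIED"
```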

So the tricky job of making data capture an integral part of the process is achieved indirectly – not by forcing the manager to capture data, but by drawing him to a tool which helps him make decisions and without which he would not want to do the transfer. While making his decisions, he has already entered the data, so there is no separate data capture step that can be missed.

What are the benefits of this process?

  1. As stated before, there can be no transcription errors, as there is no separate data communication and transcription.
  2. The chance of data not being communicated or entered is completely eliminated, as the team manager cannot perform transfers without the new tool. In fact, he would no longer want to do transfers without the tool, as it aids his decision making and helps him optimize the transfers themselves.



