
What your Data Governance Policy has to do with User Experience

March 17, 2016

We’ve all seen these Design versus User Experience (UX) pictures on LinkedIn and Twitter. They’re a great illustration of the eternal battle between how a designer intends their design to be used and how the end user actually uses it.

Image: Design versus User Experience (bit.ly/1XuAG0V)

This picture exemplifies a universal truth – if you make something too complicated, inconvenient or unachievable, people will find a way around it. This truth applies to your business too.

In projects, you have the project management triangle [1], defined by time, cost and quality. While time can easily be checked on the calendar and costs can be checked by your controlling department, quality has to be defined in advance and tested afterwards.

Image: The project management triangle (bit.ly/1Lq8fRb)

Volkswagen is an example. Its engineers had to balance certain emission standards against certain costs by a certain date. Fatally, they concluded that they could only meet the time and cost targets if they limited their solution to meeting the emission standards during testing only. So they sacrificed quality, and Volkswagen was caught in a crushing emissions scandal [2]. (We covered how data can help steer your business around such a scandal in this post.)

Fake Volkswagen apology posters in Paris ahead of the COP21 climate talks. Image: bit.ly/1pKZBm8

Something similar can happen to your product master data.

You know that high-quality product information is essential for your business. So you have a data management system, and you have implemented several checks that have to be passed before the product setup process can be finished. Now you want to order a new item to make sure it’s available for the next holiday season. The only problem: not all product details are known yet. Still, you have defined some of them as mandatory, because you want them for your website or because you need the logistics data for your storage planning.

Problems occur when your system is configured to export only complete data sets, because that forces the user onboarding the product to finish the data creation process anyway. They fill the mandatory fields with dummy values, and those values cause trouble once the data is put to use. You don’t want your customers to find a product description reading ‘lorem ipsum’, or your warehouse management system to fail to bin the received goods because no bin location is big enough to store a pallet with dimensions 999x999x999.

Dummy values that can be recognized as such can be detected and replaced with real values afterwards. But if you add further checks that prevent dummy values from being entered, employees will most likely guess values that look more plausible yet are just as wrong. Instead of increasing data quality, you have actually made it worse, because the wrong data is no longer obvious enough to find later. This does not mean you should not check the data, but the checks have to be tailored to your product onboarding process, and the quality gate you install has to be reasonable and achievable.
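To make this concrete, here is a minimal sketch of such a check in Python. The attribute names and placeholder patterns are illustrative assumptions; in practice you would derive the patterns from profiling your own data, because they are simply whatever your users actually type to get past the checks:

```python
import re

# Illustrative placeholder patterns (assumptions for this sketch):
# filler text, typical "fill-in-later" tokens, and all-nines dimensions.
DUMMY_PATTERNS = [
    re.compile(r"lorem ipsum", re.IGNORECASE),
    re.compile(r"^(tbd|n/?a|xxx+|dummy|test)$", re.IGNORECASE),
    re.compile(r"^9{3,}(x9{3,}){0,2}$"),  # e.g. 999 or 999x999x999
]

def find_dummy_values(record: dict) -> dict:
    """Return the attributes whose values look like placeholders."""
    suspicious = {}
    for attribute, value in record.items():
        text = str(value).strip()
        if any(pattern.search(text) for pattern in DUMMY_PATTERNS):
            suspicious[attribute] = value
    return suspicious

product = {
    "description": "Lorem ipsum dolor sit amet",
    "dimensions_mm": "999x999x999",
    "weight_kg": 1.2,
}
print(find_dummy_values(product))
# {'description': 'Lorem ipsum dolor sit amet', 'dimensions_mm': '999x999x999'}
```

Such a check flags obvious placeholders for later enrichment instead of blocking them, which avoids pushing users towards plausible-looking guesses.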

In a perfect world, a ‘first time right’ approach would be best. Proactively preventing bad data from entering your databases upstream is always better than reactively cleansing it downstream. The data is checked in the system where it is entered, and business rules prevent it from being sent to other systems before it is complete and correct. In the best case, you have a central, integrated system for all your company data, and you have clear responsibilities. Maybe you already have a data steward [3]. Your data steward ensures that all data-related work is performed according to your company’s data governance policies, but that doesn’t mean one poor soul has to provide all the data by himself.
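As an illustration, a simple sketch of what such an upstream gate could look like. The rule names, fields and thresholds are invented for this example; in a real MDM platform these would be configurable business rules rather than hand-written code:

```python
from typing import Callable, Optional

# A rule returns an error message, or None if the record passes.
Rule = Callable[[dict], Optional[str]]

def require(field: str) -> Rule:
    """Build a rule that insists a mandatory field is filled."""
    def rule(record: dict) -> Optional[str]:
        if not str(record.get(field, "")).strip():
            return f"missing mandatory field: {field}"
        return None
    return rule

def plausible_weight(record: dict) -> Optional[str]:
    # The 0-1000 kg range is an assumed plausibility bound for this sketch.
    weight = record.get("weight_kg")
    if weight is not None and not (0 < weight < 1000):
        return f"implausible weight_kg: {weight}"
    return None

ORDERING_RULES = [require("gtin"), require("supplier_id"), plausible_weight]

def gate(record: dict, rules) -> list:
    """Run all rules; an empty result means the record may proceed."""
    return [error for rule in rules if (error := rule(record)) is not None]

errors = gate({"gtin": "4006381333931", "weight_kg": 9999}, ORDERING_RULES)
print(errors)
# ['missing mandatory field: supplier_id', 'implausible weight_kg: 9999']
```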

From the book “Le Petit Prince” by Antoine de Saint-Exupéry, 1943

There are a few clear-cut roles and responsibilities for protecting and enhancing product information as it moves across the enterprise, from design to engineering, procurement, distribution, marketing and eCommerce, all the way to service and support. Every department is responsible for its part.

All the data you need for your ordering process must be entered before you can proceed to the next step in your supply chain; other data can be added later by other departments. To make sure this isn’t forgotten, and that everyone knows there is data still to be completed, you can implement a workflow-supported process. Within this workflow you can work with ‘trigger’ attributes that prevent data from being transferred to a target system before it is ready, as shown in the sketch below. Data that is not mandatory but nice to have can be monitored with completeness indicators, so that data sets with an insufficient level of completeness are not exported. Those aspects can be reworked and completed afterwards.
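A minimal sketch of how trigger attributes and a completeness indicator could work together. Which attributes each target system needs is an assumption you would define per interface:

```python
# Illustrative attribute groups (assumptions for this sketch).
TRIGGER_ATTRIBUTES = {
    "webshop":   ["description", "image_url", "price"],
    "warehouse": ["dimensions_mm", "weight_kg"],
}
NICE_TO_HAVE = ["marketing_text", "care_instructions", "country_of_origin"]

def ready_for(system: str, record: dict) -> bool:
    """Trigger check: the record may only be sent to a target system
    once every attribute that system requires is filled."""
    return all(str(record.get(a, "")).strip() for a in TRIGGER_ATTRIBUTES[system])

def completeness(record: dict) -> float:
    """Share of nice-to-have attributes that are filled (0.0 to 1.0)."""
    filled = sum(1 for a in NICE_TO_HAVE if str(record.get(a, "")).strip())
    return filled / len(NICE_TO_HAVE)

record = {"description": "Blue mug", "image_url": "https://example.invalid/mug.jpg",
          "price": "9.99", "marketing_text": "Starts your day right."}
print(ready_for("webshop", record))    # True  -> may go to the shop
print(ready_for("warehouse", record))  # False -> logistics data still missing
print(f"{completeness(record):.0%}")   # 33%   -> flagged for later enrichment
```

The point of the split is that the trigger attributes block an export entirely, while the nice-to-have attributes only feed an indicator that keeps the remaining work visible.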

If you have monitoring in place, you can easily see where data is missing or where values are suspiciously high or low. Very seldom-used values can be an indicator of typos, and if the most frequent values are ‘others’ or ‘misc’, that is also an occasion to rework the data.
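A simple sketch of such a frequency check over a flat export, with made-up values; in practice this would run against your MDM repository or your BI tooling:

```python
from collections import Counter

# Illustrative attribute values as they might come out of a flat export.
categories = ["mugs", "mugs", "misc", "misc", "misc", "plates", "platse"]

counts = Counter(categories)
total = sum(counts.values())

# Values used only once are candidates for review ('platse' vs 'plates').
rare = [value for value, n in counts.items() if n == 1]
print("rare values, possible typos:", rare)

# A dominant catch-all bucket signals classification work to do.
# The 25% threshold is an assumption for this sketch.
top_value, top_count = counts.most_common(1)[0]
if top_value in {"misc", "others"} and top_count / total > 0.25:
    print(f"'{top_value}' holds {top_count / total:.0%} of items - rework needed")
```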

If you need data for your processes but don’t have it, you must offer your employees a way to get it, e.g. through external data pools. This might cost extra money, but it is always cheaper than fixing the errors that wrong data causes.

If you want a lasting data solution, you have to build User Experience into your Data Governance Policies as well as your data workflows. This ensures high data quality from the product onboarding process all the way to the end consumer, making sure that they do not ‘cross the grass’ and run straight into the arms of your competitors.


[1] https://managementmania.com/en/magic-triangle-of-project-management
[2] https://en.wikipedia.org/wiki/Volkswagen_emissions_scandal
[3] http://www.stibosystems.com/global/explore-stibo-systems/master-data-management/data-stewardship.aspx


Karl Meier brings more than 20 years of experience in enterprise software implementation projects and Master Data Management. In his present role at Stibo Systems, Karl is a project manager and solution consultant, implementing product and customer Master Data Management projects in the retail and distribution sector.


