
GDPR in real life – experiences of a GDPR-readiness project from a data scientist’s perspective


With slight exaggeration, alarm bells have been ringing for years for companies that manage data: 25 May 2018 is the date when GDPR, the General Data Protection Regulation of the EU, enters into force. After getting familiar with the topic, the focus should now be on implementation. Depending on company size, some enterprises have to work out substantial changes in both their data management and their data analysis processes. So what kinds of tasks and challenges does a company face to make its data warehouses GDPR-ready? I’m going to tell the story using one of our clients as an example, a company that started preparing for GDPR more than a year ago.

What kinds of tasks does GDPR-readiness generate from the data assets’ point of view? First, the data elements that need to be protected should be assessed and the related data management processes should be reviewed. It is not surprising in such cases that some processes pop up that need to be refined or newly established.

Only after settling these questions could we start the data warehousing work. We created a central meta database that describes where personal data can be found, which data management processes it is involved in and for what purposes it is used. In the meta database we also described the above-mentioned data management processes and recorded the parameters specified for each data element.
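To make this more concrete, here is a minimal sketch of what a single entry of such a meta database could look like, written in Python. The field names, categories and retention values are illustrative assumptions, not our client’s actual schema.

```python
from dataclasses import dataclass, field
from typing import List

# A minimal sketch of one record in a GDPR meta database. Field names and
# example values are illustrative assumptions, not the client's real schema.
@dataclass
class PersonalDataElement:
    system: str                       # source system or data warehouse schema
    table: str                        # physical table holding the data
    column: str                       # column containing personal data
    data_category: str                # e.g. "contact data", "transaction data"
    processing_purposes: List[str] = field(default_factory=list)
    legal_basis: str = "contract"     # e.g. "consent", "contract", "legal obligation"
    retention_days: int = 365         # how long the data may be kept
    anonymization_rule: str = "mask"  # how the element is treated on erasure

# Example entry: where a customer's mother's name lives and how it is handled.
registry = [
    PersonalDataElement(
        system="CRM",
        table="customer",
        column="mothers_name",
        data_category="identification data",
        processing_purposes=["customer identification"],
        legal_basis="legal obligation",
        retention_days=8 * 365,
        anonymization_rule="mask",
    )
]
```

A registry like this is what later lets automated jobs answer questions such as “which columns must be masked when a customer exercises the right to erasure?”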

What is the most obvious piece of advice a lawyer specialized in data protection can give during GDPR preparation? Here comes the answer of Dr. Gáspár Frivaldszky, a lead auditor at ABT: according to the principle of data minimization, the most obvious solution is to get rid of personal data. Anonymization is an excellent way to reach this goal, since GDPR does not apply to anonymized data – we call a dataset anonymized if the data subjects are no longer identifiable. Anonymized data is not personal data. In this respect, the Personal Data Protection Commission of Singapore released a useful guide a couple of weeks ago.
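As an illustration of the idea, the sketch below shows a simple form of de-identification on a hypothetical customer table: direct identifiers are dropped and quasi-identifiers are generalized into coarse buckets. Whether the result counts as truly anonymized still has to be assessed case by case (for example by checking group sizes), so treat this only as a starting point.

```python
import pandas as pd

# A minimal de-identification sketch on a made-up customer table.
# Direct identifiers are dropped; quasi-identifiers (birthdate, postcode)
# are generalized into coarse buckets. Real projects should also verify
# group sizes (k-anonymity) before calling the result anonymized.
customers = pd.DataFrame({
    "name":          ["A. Kovacs", "B. Szabo", "C. Nagy"],
    "birthdate":     ["1980-02-01", "1975-07-15", "1992-11-30"],
    "postcode":      ["1011", "1118", "9021"],
    "monthly_spend": [120.0, 85.5, 240.0],
})

anonymized = customers.drop(columns=["name"])                 # remove direct identifiers
anonymized["birth_decade"] = anonymized["birthdate"].str[:3] + "0s"
anonymized["region"] = anonymized["postcode"].str[:2]         # keep only a coarse area code
anonymized = anonymized.drop(columns=["birthdate", "postcode"])

print(anonymized)
```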

The second big challenge was figuring out how to meet the GDPR requirements during anonymization without ruining our business goals at the same time. For example, if we have to mask a subject’s mother’s name, it causes few problems in customer-based reports, but if the subject’s transaction history or geo-demographic data becomes inaccessible, that might block an automated process, report or client segmentation.

The next step is data cleansing. Here, too, we have to pay attention to business aspects. Data cleansing processes rely heavily on personal data, because we create client segments and record matches based on it. Once subjects are anonymized or masked, the data pairing algorithms need adjustments. One possible solution is to “freeze” the client segments and derived attributes of subjects that were masked earlier. These groups and attributes keep their previous state and no longer (or less frequently) participate in the daily data cleansing process.
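The sketch below illustrates the freezing idea on a toy profile table; the column names and the segmentation rule are assumptions made up for the example, not the client’s actual logic.

```python
import pandas as pd

# A minimal sketch of "freezing" segments for masked subjects. Column names
# and the segmentation rule are illustrative assumptions.
profiles = pd.DataFrame({
    "customer_id":   [1, 2, 3],
    "monthly_spend": [120.0, None, 240.0],   # None: no longer linkable after masking
    "is_masked":     [False, True, False],
    "segment":       ["standard", "premium", "premium"],  # last known segments
})

def assign_segment(spend: float) -> str:
    # Stand-in for the real segmentation model.
    return "premium" if spend > 200 else "standard"

def refresh_segments(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    active = ~df["is_masked"]
    # Recompute segments only for identifiable subjects...
    df.loc[active, "segment"] = df.loc[active, "monthly_spend"].apply(assign_segment)
    # ...while masked subjects keep ("freeze") their previously derived segment.
    return df

print(refresh_segments(profiles))
```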

To give a brief summary after several months of work, I can say that the GDPR process requires long preparation and an enormous amount of work at any company with at least thousands of customers. However, besides compliance with the law, the outcome is more transparent data management and processes, as well as more precise and structured data for the company. The data warehousing part of the GDPR preparation taught us another important lesson: you have to take the time to ensure that every participant speaks the same language – people on the business side, focusing on correct client service and sales targets; lawyers, dealing with the new data management rules; and data scientists, responsible for the implementation.


Hiflylabs creates business value from data. The core of the team has been working together for 15 years, currently with more than 70 passionate employees.

 

Data enrichment


In practical data mining it is a common experience that there is much more room to improve a forecast by introducing new aspects than by changing the type of model used. This is sometimes simplified as “more data, better forecasts”. More data here does not mean more gigabytes, but new perspectives that let us describe clients’ behavior in more depth. For example, if we want to predict expected purchases, we may include data from the customer service system, not just build on past purchases.
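As a rough illustration, the sketch below enriches a hypothetical purchase table with customer service activity before modelling; the table and column names are assumptions made for the example.

```python
import pandas as pd

# A minimal data-enrichment sketch on made-up tables: past purchases are
# enriched with customer service activity before modelling expected purchases.
purchases = pd.DataFrame({
    "customer_id": [1, 1, 2, 3],
    "amount":      [50.0, 70.0, 20.0, 90.0],
})
service_tickets = pd.DataFrame({
    "customer_id": [1, 2, 2],
    "complaint":   [False, True, True],
})

# Aggregate both sources to the customer level.
purchase_features = purchases.groupby("customer_id")["amount"].agg(
    total_spend="sum", purchase_count="count")
service_features = service_tickets.groupby("customer_id")["complaint"].agg(
    ticket_count="count", complaint_count="sum")

# The enriched feature table: purchase history plus customer service behaviour.
features = purchase_features.join(service_features, how="left").fillna(0)
print(features)
```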

This is data enrichment. And it is quite laborious work.

On the one hand, you have to forget the rule that says “a data scientist’s job is to make the best forecast out of the given data”. Instead, you have to think about new data sources that can bring in new aspects.

On the other hand, significant resources have to be invested in data preparation, because if valuable data were easy to get, we would probably already have it. Therefore, we must choose carefully the data sources we intend to use: it can easily take a tremendous amount of time to acquire a data source, merge it with our existing data, and finally adapt it for analysis.

Regarding data enrichment, getting data from the internet may seem an obvious solution at first sight. Why is this option so often preferred? Surprisingly, internal company data is often much harder to access (it has to be requested from other units, order documents written, legal department approvals gathered, and so on). Purchasing data from professional data services is often not an option either, because of the price, the difficulty of the procurement process – or simply the lack of appropriate data.

Public data, which can be obtained easily, is, by contrast, the Wild West itself. The challenges of that route deserve a separate post.

Personally, I would prefer the middle ground. I think the best form of data enrichment would be for different companies to share their data with each other. This is not impossible, even legally, while complying with data protection legislation. Personal data stays protected if the shared data is never client-level but refers to micro segments. Micro segments are formed along categories such as geo-demographic factors (age, gender, education, etc.), income and social status.

For example, a utility could share average invoice values, a telecommunications company mobile data usage, or a bank card transaction patterns. In exchange for this data, they could charge their partners. I have already encountered such agreements on the market, but only on a pilot basis.
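A minimal sketch of what such micro-segment-level sharing could look like, assuming a hypothetical invoice table at a utility: client-level rows never leave the company, only aggregates over sufficiently large segments.

```python
import pandas as pd

# A minimal sketch of sharing data at micro-segment level instead of client level.
# Column names and segment definitions are illustrative assumptions: a utility
# aggregates invoice values by age band and region before handing them over.
invoices = pd.DataFrame({
    "customer_id":   [1, 2, 3, 4, 5],
    "age_band":      ["18-29", "18-29", "30-44", "30-44", "45-59"],
    "region":        ["north", "north", "north", "north", "south"],
    "invoice_value": [38.0, 42.0, 55.0, 61.0, 47.0],
})

MIN_GROUP_SIZE = 2  # suppress segments too small to stay anonymous

shared = (invoices
          .groupby(["age_band", "region"])["invoice_value"]
          .agg(avg_invoice="mean", customers="count")
          .reset_index())
shared = shared[shared["customers"] >= MIN_GROUP_SIZE]  # no client-level rows leave the company
print(shared)
```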

What kind of data would you gladly share? And what kind of data would you pay for in return to make your work more effective?


Hiflylabs creates business value from data. The core of the team has been working together for 15 years, currently with more than 70 passionate employees.

 

I don’t like average, I vote for median

Attention, statistical topic ahead! But it is so easy to follow that there is no excuse to skip it!


We live in the era of data: it is mass-produced, easy to access and can be processed extremely fast. In this accelerated process we should not lose sight of the purpose of the KPIs created from the data. Are we sure that the KPIs and statistical measures that looked practical 20-30 years ago are still the best choice?

Let’s have a look at the most popular and trickiest statistical indicator, the simple average. The general view is that the average salary describes well the typical salary of a population. However, the average is usually much higher than the more representative median. To understand what the median means, think of a group of kids lined up by height. The height of the kid standing in the middle of the line is the median height of the group. Accordingly, the median salary in Hungary is the point at which half of the population earns more and half earns less. Based on EUROSTAT data, the median salary in Hungary in 2016 was 4,772 EUR, while the average salary was 5,397 EUR. We would like to believe that the “average customer” manages 5,397 EUR, but unfortunately 4,772 EUR is much closer to the truth. The median tends to be lower, but also truer, than the average.

We see much bigger differences in enterprise datasets, especially if the calculation is based on fewer clients or a wider range of values. For example, suppose we have 100 businesses in a database, each with an income around 1 million USD. If a company with an income of 100 million USD enters the dataset, the average roughly doubles just because of that one firm! Still, the group is better represented by the median, which is still 1 million USD.
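The effect is easy to reproduce with a few lines of Python:

```python
from statistics import mean, median

# The example above: 100 businesses with incomes around 1 million USD,
# then one 100-million outlier joins the dataset.
incomes = [1.0] * 100                             # incomes in million USD
print(mean(incomes), median(incomes))             # 1.0 1.0

incomes.append(100.0)                             # one large firm enters
print(round(mean(incomes), 2), median(incomes))   # 1.98 1.0 – the average nearly doubles
```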

Another example: how do we decrease the average call time of a call center employee who makes 100 calls of 30 seconds each, plus a single call that lasts an hour? A crafty analyst who knows the nature of the average would suggest simply leaving out that one hour-long call, and the target is reached! But did the employee’s performance change? No, because the median call time remained the same.

It is worth remembering how fragile the average can be in the face of even a single data quality error or extreme value.

The average became more popular than the median because it is easy to calculate from the sum of the values and the number of records, so it was easy to handle well before the era of computers and databases.

But we now live in the era of databases, when it is worth using the more accurate median in business reports to describe our clients.


Hiflylabs creates business value from data. The core of the team has been working together for 15 years, currently with more than 50 passionate employees.

Data: Power or Democracy?


We are witnessing an interesting battle in huge enterprises.

Managers at different levels realized a while ago that if they are the ones with access to their company’s data assets, they can best strengthen their positions. Many have also recognized that it is worth taking the lead when it comes to data warehouses and analytical systems. By being the source of the initiative, they can shape the data structures according to their own perspectives. Not only does this help them manage the important processes, but they also gain an advantage because sooner or later others will turn to them for information. (Few of them calculated the extra workload this means for their departments, as – regardless of their original role – they become data providers.)

Data has become a factor of power in the eyes of many managers.

On the other hand, workflows have become more data-intensive than ever before, even for those at the bottom of the organizational hierarchy. Most modern organizations have recognized that it is worth extending the rules of democracy to the use of data and allowing their employees to access it. Thus, fewer steps are required in workflows and decisions are made more quickly. What is more, employees become more motivated as they see the overall goals more clearly. Noticing this trend, all significant Business Intelligence (BI) suppliers have moved towards self-service systems. Experts can get answers to more and more complex problems through an interface that is easy to understand and operate – data as a factor of power has started to slip out of the hands of overbearing managers. Interestingly, start-ups in the field of Big Data have set similar goals. They provide access to Big Data datasets through a simple interface; in this way, no special knowledge is required to analyse the data.

I believe that the democratization of data is irreversible. Although it is possible to use politics in a clever way within the rules of democracy, different tools are necessary than in the era of absolute monarchies…


Hiflylabs creates business value from data. The core of the team has been working together for 15 years, currently with more than 50 passionate employees.

