
GDPR in real life – experiences of a GDPR-readiness project from a data scientist’s perspective


With slight exaggeration, alarm bells have been ringing for years for data management companies: 25 May 2018 is the date when the GDPR, the General Data Protection Regulation of the EU, enters into force. After getting familiar with the topic, the focus should now be on implementation. Depending on company size, some enterprises have to work out major changes in both their data management and data analysis processes. So what kind of tasks and challenges does a company face to make its data warehouses GDPR-ready? I’m going to tell the story using one of our clients as an example, a company that started preparing for GDPR more than a year ago.

What kind of tasks does GDPR-readiness generate from the data asset’s point of view? First, the data elements that need to be protected have to be assessed, and the related data management processes have to be reviewed. Not surprisingly, in such cases some processes pop up that still need to be refined or established in the first place.

Only after settling these questions could we start the data warehousing work. We created a central meta database that describes where personal data can be found, what kind of data processing it is involved in and for what purposes it is used. In the meta database we also described the above-mentioned data management processes and recorded the parameters used for each data element.
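For illustration only, a single entry of such a meta database could look something like the sketch below; the field, table and column names are hypothetical, not our client’s actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class PersonalDataElement:
    """One illustrative record of a GDPR meta database."""
    table: str                       # where the personal data is stored, e.g. "dw.customer"
    column: str                      # the column holding the personal data
    data_category: str               # e.g. "contact data", "transaction data"
    processing_purposes: list = field(default_factory=list)
    legal_basis: str = ""            # e.g. "consent", "contract", "legitimate interest"
    retention_days: int = 0          # how long the data may be kept

# Example: customers' e-mail addresses used for billing and service notifications
entry = PersonalDataElement(
    table="dw.customer",
    column="email_address",
    data_category="contact data",
    processing_purposes=["billing", "service notifications"],
    legal_basis="contract",
    retention_days=8 * 365,
)
```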

During GDPR preparation, what is the most straightforward advice a lawyer specialized in data protection can give? Here is the answer of Dr. Gáspár Frivaldszky, lead auditor at ABT: following the principle of data minimization, the most obvious solution is to get rid of personal data altogether. Anonymization is an excellent way to reach this goal: the GDPR does not apply to anonymized data, and a dataset counts as anonymized if the data subjects are no longer identifiable. Anonymized data is not personal data. In this respect, the Personal Data Protection Commission of Singapore released a useful guide a couple of weeks ago.
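As a toy illustration of the idea (not the exact transformation used in the project), anonymization usually means dropping direct identifiers and generalizing quasi-identifiers so that individuals can no longer be singled out:

```python
def anonymize(record: dict) -> dict:
    """Illustrative anonymization: drop direct identifiers and
    generalize quasi-identifiers (ages into bands, postcodes into regions)."""
    direct_identifiers = {"name", "mothers_name", "email", "phone", "tax_id"}
    out = {k: v for k, v in record.items() if k not in direct_identifiers}

    if "age" in out:                       # 37 -> "30-39"
        decade = (out["age"] // 10) * 10
        out["age"] = f"{decade}-{decade + 9}"
    if "postcode" in out:                  # keep only the leading digit of the postcode
        out["postcode"] = str(out["postcode"])[:1] + "xxx"
    return out

print(anonymize({"name": "J. Smith", "age": 37, "postcode": 1117, "balance": 1200}))
# {'age': '30-39', 'postcode': '1xxx', 'balance': 1200}
```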

The second big challenge was to figure out how to meet the GDPR requirements during anonymization without ruining our business goals at the same time. For example, masking a subject’s mother’s name may cause little trouble in customer-based reports, but if the subject’s transaction history or geo-demographic data becomes inaccessible, that might block an automated process, a report or a client segmentation.
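One technique that can help here, sketched below with made-up names, is consistent pseudonymization: replacing the identifier with a keyed hash, so that the same subject always gets the same token and transaction histories or segment assignments can still be joined. (Pseudonymized data still counts as personal data under the GDPR, so this is a complement to, not a substitute for, true anonymization.)

```python
import hmac
import hashlib

SECRET_KEY = b"illustrative-key-stored-outside-the-warehouse"

def pseudonymize(customer_id: str) -> str:
    """Deterministic keyed hash: the same customer always gets the same token,
    so joins, transaction histories and segment assignments keep working."""
    return hmac.new(SECRET_KEY, customer_id.encode(), hashlib.sha256).hexdigest()[:16]

# If both the customer table and the transaction table are keyed on the same token,
# "transactions per client segment" style reports still run without exposing the raw ID.
token = pseudonymize("CUST-000123")
```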

The next step is data cleansing, where we also have to pay attention to business aspects. Data cleansing processes rely heavily on personal data, because client segments and joins are built on it. With anonymized and masked subjects, the data pairing algorithms need adjustments and modifications. One possible solution is to “freeze” the client segments and attributes derived earlier for masked subjects. These groups and attributes then keep their previous state and no longer (or less frequently) participate in the daily data cleansing process.
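A minimal sketch of the “freeze” idea, with hypothetical field names: masked subjects keep their previously derived segment and are simply skipped by the daily refresh.

```python
def refresh_segments(customers, compute_segment):
    """Daily segmentation refresh that leaves masked (frozen) subjects untouched.

    `customers` is a list of dicts with at least 'is_masked' and 'segment' keys;
    `compute_segment` is whatever model or rule set assigns a segment.
    """
    for customer in customers:
        if customer["is_masked"]:
            continue                      # frozen: keep the previously derived segment
        customer["segment"] = compute_segment(customer)
    return customers
```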

To give a brief summary after several months of work: the GDPR process requires long preparation and an enormous amount of work at any company with at least a few thousand customers. However, besides compliance with the law, the outcome is more transparent data management and processes, as well as more precise and better structured data for the company. The data warehousing side of GDPR preparation has taught us another important lesson: you have to take the time to make sure that every participant speaks the same language: people on the business side, focusing on correct client service and sales targets; lawyers, dealing with the new data management rules; and data scientists, responsible for the implementation.


Hiflylabs creates business value from data. The core of the team has been working together for 15 years, currently with more than 70 passionate employees.

 

I don’t like average, I vote for median

Attention! A statistical topic is coming! But it’s so easy to follow that there’s no excuse to skip it!


We live in the era of data: it is mass-produced, easy to access and can be processed extremely fast. In this accelerated process we should not lose focus on the purpose of the KPIs created from the data. Are we sure that the KPIs and statistical measures that looked practical 20-30 years ago are still the best choice?

Let’s have a look at the most popular and trickiest statistical indicator, the simple average. The general view is that the average salary describes the typical salary of a population well. However, the average is usually much higher than the more representative median. To understand what the median means, think about a group of kids lined up by height. The height of the kid standing in the middle of the line is the median height of the group. Accordingly, the median salary in Hungary is the point at which half of the population earns more and half earns less. Based on Eurostat data, the median salary in Hungary in 2016 was 4,772 EUR, while the average salary was 5,397 EUR. We would like to believe that the “average customer” manages 5,397 EUR, but unfortunately 4,772 EUR is much more accurate. The median tends to be lower, but it also tends to be truer than the average.
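The kids-in-a-line picture translates directly into code; the heights below are made up just to show the mechanics.

```python
heights_cm = [131, 118, 140, 125, 135, 122, 128]   # seven kids, in no particular order

heights_cm.sort()                          # line the kids up by height
median_height = heights_cm[len(heights_cm) // 2]
print(median_height)                       # 128 -- half the kids are shorter, half are taller

print(sum(heights_cm) / len(heights_cm))   # the average of the same group, about 128.4
```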

We see much bigger differences in enterprise datasets, especially if the calculation is based on few clients or on a wide range of values. For example, say we have 100 businesses in a database, each with an income of around 1 million USD. If a company with an income of 100 million USD then enters the dataset, the average roughly doubles because of that single firm! The group is still better represented by the median, which remains 1 million USD.
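The arithmetic of that example, using Python’s standard library:

```python
from statistics import mean, median

incomes = [1_000_000] * 100            # 100 businesses, roughly 1 million USD each
print(mean(incomes), median(incomes))  # both 1,000,000

incomes.append(100_000_000)            # one 100-million-USD company joins the dataset
print(mean(incomes), median(incomes))  # the mean jumps to about 1.98 million, the median stays at 1,000,000
```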

Another example: a call centre employee makes 100 calls of 30 seconds each, plus a single one-hour call. That one outlier pushes the average call time above a minute, more than double the typical call. A crafty analyst who knows the nature of the average would simply suggest excluding that single one-hour call, and the target of a lower average is reached! But did the employee’s performance change? No, because the median call time remained the same.
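The same effect with the call data (durations in seconds):

```python
from statistics import mean, median

calls = [30] * 100 + [3600]                  # a hundred 30-second calls plus one hour-long call
print(mean(calls) / 60, median(calls) / 60)  # mean ~1.09 minutes, median 0.5 minutes

print(mean([30] * 100) / 60)                 # drop the outlier and the mean "improves" to 0.5 minutes
```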

It is worth remembering how sensitive the average is to even a single data quality fault or extreme value.

The average became more popular than the median because it is easy to calculate from the sum of the values and the number of records, so it was easy to handle long before the era of computers and databases.
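That convenience is easy to see in a sketch: a running average needs only a sum and a count kept along the way, while the median needs every value stored and sorted.

```python
def running_average(stream):
    """The average can be maintained with just a running sum and a count."""
    total, count = 0.0, 0
    for value in stream:
        total += value
        count += 1
    return total / count

def exact_median(stream):
    """The median needs all values kept and sorted -- trivial for a database, painful on paper."""
    values = sorted(stream)
    middle = len(values) // 2
    if len(values) % 2:
        return values[middle]
    return (values[middle - 1] + values[middle]) / 2
```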

But we do live in the era of databases, so it is worth using the more accurate median in business reports to describe our clients.


Hiflylabs creates business value from data. The core of the team has been working together for 15 years, currently with more than 50 passionate employees.