IBM unveils world-first machine learning training method for GDPR compliance


Connor Jones

25 Nov, 2021

IBM researchers have unveiled a novel method of training machine learning (ML) models that minimises the amount of personal data required and preserves high levels of accuracy.

The research is thought to be a boon to businesses that need to stay compliant with data protection and data privacy laws such as the General Data Protection Regulation (GDPR) and the California Privacy Rights Act (CPRA).

In both GDPR and CPRA, ‘data minimisation’ is a core component of the legislation, but it has been difficult for companies to determine the minimum amount of personal data needed when training ML models.

It’s especially difficult given that the goal of training ML models is usually to achieve the highest possible accuracy in predictions or classifications, regardless of the amount of data used.

The findings from the study, thought to be a world first in the field of machine learning, showed that less data can be used in training datasets by putting it through a process of generalisation, while preserving the same level of accuracy as larger datasets.

Even when the entire dataset was generalised, preserving none of the original data, the researchers never saw prediction accuracy drop below 33%. In some cases, they were able to achieve 100% accuracy even with some generalisation applied.

In addition to adhering to the data minimisation principle of major data protection laws, the researchers suggest that smaller data requirements could also cut costs in areas such as data storage and data management.

Data generalisation process

Businesses can become more compliant with data laws by removing or generalising some of the input features of runtime data, IBM researchers showed.

Generalisation involves replacing a feature’s specific values with broader, generalised ones. For a numerical feature such as ‘age’, specific values like 37 or 39 could be replaced with a generalised range of 36-40.

A categorical feature such as ‘marital status’ could have the specific values ‘married’, ‘never married’, and ‘divorced’. A generalisation of these could be ‘never married’ and ‘divorced’, which eliminates one value, decreasing specificity, but still preserves a degree of accuracy, since ‘divorced’ implies that an individual has, at some point, been married.
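To make the idea concrete, here is a minimal sketch of both kinds of generalisation using pandas. This is not IBM’s code: the column names, bin width, and records are illustrative assumptions, and ‘married’ is folded into ‘divorced’ to mirror the example above.

```python
import pandas as pd

# Illustrative runtime records; column names and values are assumptions.
df = pd.DataFrame({
    "age": [37, 39, 52, 24, 61],
    "marital_status": ["married", "never married", "divorced",
                       "never married", "married"],
})

# Numerical generalisation: replace exact ages with five-year ranges,
# so 37 and 39 both fall into the interval (35, 40].
df["age_gen"] = pd.cut(df["age"], bins=range(20, 70, 5))

# Categorical generalisation: fold 'married' into 'divorced', as in the
# article's example, since both imply the person has been married.
df["marital_gen"] = df["marital_status"].replace({"married": "divorced"})

print(df)
```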

The numerical feature becomes less specific, with the generalised range admitting three values beyond the original two, while the categorical feature becomes less detailed. The quality of these generalisations is then analysed using a metric; IBM chose the NCP (normalised certainty penalty) metric over the other candidates considered as it lent itself best to the purposes of data privacy.
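NCP is a standard information-loss measure from the anonymisation literature. The sketch below shows its usual textbook formulation, not necessarily IBM’s exact implementation: a generalised numeric value is penalised in proportion to how much of the attribute’s domain its range covers, and a generalised categorical value in proportion to how many original values it merges. The domains used here are assumptions.

```python
def ncp_numeric(interval, domain):
    """Normalised certainty penalty for one generalised numeric value.

    interval: (low, high) of the generalised range, e.g. (36, 40)
    domain:   (min, max) of the attribute across the whole dataset
    A wider interval loses more information, so it earns a higher penalty.
    """
    low, high = interval
    dmin, dmax = domain
    return (high - low) / (dmax - dmin)


def ncp_categorical(n_merged, n_total):
    """NCP for one generalised categorical value.

    n_merged: number of original values collapsed into the generalised one
    n_total:  number of distinct values in the attribute's domain
    An unmerged value carries no penalty.
    """
    return 0.0 if n_merged <= 1 else n_merged / n_total


# Example: 'age' generalised to 36-40 over an assumed 18-90 domain, and
# 'married'/'divorced' merged within a three-value domain.
print(ncp_numeric((36, 40), (18, 90)))  # 4 / 72, roughly 0.056
print(ncp_categorical(2, 3))            # 2 / 3, roughly 0.667
```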

[Diagram: the generalisation process. Credit: IBM]

The researchers first selected a dataset and trained one or more target models on it to establish a baseline. Generalisation was then applied and accuracy calculated and recalculated in a loop (see diagram above) until a final generalisation was ready to be compared against the baseline.
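In outline, that loop looks something like the following scikit-learn sketch. Everything here is an assumption for illustration (the dataset, the target model, the equal-width binning used as a stand-in for generalisation, and the accuracy threshold), rather than a detail taken from IBM’s paper.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Baseline: train the target model on the original, ungeneralised data.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
baseline = accuracy_score(y_test, model.predict(X_test))

def generalise(X, n_bins):
    """Crude stand-in for generalisation: snap every feature to the
    midpoint of one of n_bins equal-width ranges. Fewer bins means a
    coarser, more privacy-preserving representation."""
    X_out = X.copy()
    for j in range(X.shape[1]):
        col = X[:, j]
        edges = np.linspace(col.min(), col.max(), n_bins + 1)
        idx = np.clip(np.digitize(col, edges) - 1, 0, n_bins - 1)
        X_out[:, j] = (edges[idx] + edges[idx + 1]) / 2
    return X_out

# Generalise the runtime data ever more coarsely, recalculating accuracy
# each pass and stopping once it falls below an acceptable threshold.
threshold = baseline - 0.02
for n_bins in (32, 16, 8, 4, 2):
    acc = accuracy_score(y_test, model.predict(generalise(X_test, n_bins)))
    print(f"bins={n_bins:2d}  accuracy={acc:.3f}  (baseline {baseline:.3f})")
    if acc < threshold:
        break
```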

[Diagram: decision-tree trimming. Credit: IBM]

The accuracy of the target model is calculated using decision trees (see above), which are gradually trimmed from the bottom upwards while the researchers take note of any significant decreases in accuracy.

If accuracy is maintained, or at least meets an acceptable threshold, once the generalised data is applied, the researchers then refine the generalisation by continuing to trim the decision tree from the bottom upwards, widening the generalised range of a given feature, until a final, optimised generalisation is reached.
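The sketch below illustrates the bottom-up trimming idea with scikit-learn. Capping max_depth is used here as a simple stand-in for pruning the lowest splits of a single tree, and the dataset and the notion of a ‘significant’ drop are illustrative assumptions, not details from the research.

```python
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A full-depth tree is the starting point: its leaves carve each feature
# into fine-grained ranges, i.e. a very specific "generalisation".
full = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
base_acc = accuracy_score(y_test, full.predict(X_test))
print(f"depth={full.get_depth()}  accuracy={base_acc:.3f}")

# Trim from the bottom upwards: each cut removes the lowest splits, so a
# wider range of feature values lands in the same leaf (a coarser
# generalisation), while we watch for any significant drop in accuracy.
for depth in range(full.get_depth() - 1, 0, -1):
    pruned = DecisionTreeClassifier(max_depth=depth, random_state=0)
    pruned.fit(X_train, y_train)
    acc = accuracy_score(y_test, pruned.predict(X_test))
    print(f"depth={depth}  accuracy={acc:.3f}")
```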