In the news: automated decision-making

Over the last week, automated decision-making featured in the news. Ofqual (the Government agency regulating qualifications and exams for England) used an algorithm based on statistical modelling to issue A-level grades for students. Around 40% of assigned grades turned out to be lower than teachers' assessments, seriously affecting young people's university and employment prospects.

At the heart of objections to Ofqual’s approach to grade assignment were significant data protection concerns about fairness, transparency and accuracy. After a public outcry, assigned grades based on Ofqual’s statistical model were withdrawn and replaced with teacher-assessed grades.

By now, the GDPR requirements for fairness, accuracy and transparency are fairly well known. However, this particular set of events also put other aspects of data protection law onto the public's radar – those relating to automated processing.

‘Automated processing’ is addressed in Recitals 71 and 75, while Article 22 sets out specific data subject rights when decisions affecting them are made based on automated processing of their personal data.

What does ‘automated decision-making’ mean?
‘Automated decision-making’ means, essentially, using computers to make decisions about people, based on processing of their personal data. Article 22 applies to decisions produced through solely automated means, where processing is carried out on personal data without any human intervention or influence while the decision-making takes place. This type of processing by its nature represents a higher risk to the rights and freedoms of the data subject.

When is ‘automated decision-making’ allowed?
Because of the risk to individual rights and freedoms, solely automated decision-making that produces ‘legal effects’ or ‘similarly significant’ effects for the individual is generally prohibited unless certain conditions apply. Those conditions are:

22.2.a: the processing is necessary for entering into, or performance of, a contract between the Controller and the data subject;

  • This requires that the automation be critical to entering into or performing the terms of the contract. There must be a compelling reason to rely on automation without human input.

22.2.b: EU or Member State law applicable to the Controller authorises the processing and sets out suitable safeguards to protect data subjects’ rights, freedoms and legitimate interests;

or
22.2.c: the data subject has given explicit consent for the processing.

  • Consent must be freely given, specific, informed and unambiguous in order to be valid.

When special categories of personal data are concerned, the only lawful bases that can be applied are explicit consent (9.2.a) or substantial public interest (9.2.g), and only where suitable safeguards for the rights, freedoms and interests of data subjects have been put in place.
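As a rough illustration only (and certainly not legal advice), the conditions above can be sketched as a short decision function. All names here are hypothetical and the booleans stand in for legal assessments that in practice require careful, case-by-case judgement:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """Hypothetical summary of a proposed automated decision (illustrative fields)."""
    solely_automated: bool             # no meaningful human involvement
    significant_effect: bool           # legal or similarly significant effect
    special_category_data: bool        # Article 9 data involved
    necessary_for_contract: bool       # Art. 22(2)(a)
    authorised_by_law: bool            # Art. 22(2)(b), with safeguards in the law
    explicit_consent: bool             # Art. 22(2)(c) / Art. 9(2)(a)
    substantial_public_interest: bool  # Art. 9(2)(g)
    safeguards_in_place: bool          # suitable safeguards for rights and freedoms

def article_22_permits(d: Decision) -> bool:
    """Sketch of the Article 22 prohibition and its exceptions."""
    # The prohibition only bites on solely automated decisions with
    # legal or similarly significant effects.
    if not (d.solely_automated and d.significant_effect):
        return True
    # Special category data: only explicit consent or substantial
    # public interest will do, and safeguards must be in place.
    if d.special_category_data:
        return d.safeguards_in_place and (
            d.explicit_consent or d.substantial_public_interest
        )
    # Otherwise any one of the three Article 22(2) conditions suffices.
    return d.necessary_for_contract or d.authorised_by_law or d.explicit_consent
```

For example, a solely automated, significant decision resting only on explicit consent passes the check, while the same decision involving special category data fails unless safeguards are also in place.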

Is automated processing the same as profiling?
No; ‘solely automated decision-making’ and ‘profiling’ aren’t quite the same thing, although they often overlap. ‘Profiling’ is when an individual’s personal data is used to make judgements or predictions about their personal characteristics or behaviour, and it can include elements of human intervention in the processing. ‘Automated decision-making’ does not necessarily produce judgements or predictions, and – where solely automated – has no element of manual, human involvement.

Got any questions? Contact us here…