Facial Recognition, Privacy and the GDPR

By Rowenna Fielding

Is that a face?

Humans are wired to perceive faces – there’s even a name for our tendency to see faces in things that are not people (‘pareidolia’) – but we have an understanding of context that allows us to distinguish between ‘thing that looks like a face’ and ‘an actual person’s face’. A couple of million years of evolution have given us the ability to know which is a bell pepper and which is a person, without having to consciously work through a process of analysis to reach the answer.

Computers, though, don’t have this bio-magical capacity. They can only follow instructions, and it turns out that writing instructions to replicate our unconscious ability to tell the difference between a cheese grater and a human is actually rather challenging. For such a system to be useful, it needs to have a low rate of both false positives (where a face-like thing is incorrectly identified as a human face) and false negatives (where a human face is not recognised as such). A human can tell when a person is wearing a clown mask versus the mask on its own, but a computer just sees a pattern which may or may not match the instructions it has been given for picking out real faces.
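To make the false positive/negative point concrete, here’s a rough sketch of automated face detection in Python. It assumes OpenCV’s off-the-shelf Haar-cascade detector purely for illustration – the choice of library, the parameter values and the image path are assumptions of mine, not a description of any particular product:

```python
import cv2

# Load the pre-trained frontal-face cascade that ships with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

# 'photo.jpg' is a placeholder path, used for illustration only.
image = cv2.imread("photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# The detector only matches patterns of light and dark; it has no concept of
# 'an actual person'. Loosening minNeighbors tends to increase false positives
# (face-like things flagged as faces), while tightening it tends to increase
# false negatives (real faces missed).
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    print(f"Possible face at x={x}, y={y}, size {w}x{h}")
```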

Uses for facial recognition

On its own, the ability to use a computer to tell real faces apart from things that just look like faces is of limited value, and unlikely to have much of an impact on anyone’s privacy.

However, when we use algorithms to track where people’s faces appear, or to form judgements about their features, the uses and consequences of facial recognition start to have greater impact.

Facial recognition technology can give us answers to questions like ‘have I seen this face before?’ and ‘what can I learn about this person?’, but we need to be careful about relying on those answers, because their accuracy will depend on how carefully and rigorously the questions were designed.

At the moment, there is a lot of hype – and justifiable fear – about facial recognition technology. It’s being used in law enforcement, workplace security, school registration and job candidate interviews. It’s being used to make judgements about people’s ethnicity, sexuality or gender, their emotions, personality or character; and it’s even being used to try to predict people’s behaviour.

These are uses that do have significant impacts on individuals’ rights, freedoms and welfare, whether the judgements are accurate or not (and they’re mostly not).

Problems with facial recognition technology

  • It’s easily confused by changes in lighting, or the angle at which a face is viewed.
  • Lots of people have doppelgangers.
  • Biased inputs lead to biased outputs – currently, commercial facial recognition technology is much less accurate for female-presenting and non-white faces.
  • There’s no way to know it’s being used unless the fact is disclosed.
  • Judging people based on their looks does not produce reliable results.
  • The practice of editing an original picture to make it more likely to obtain a match (‘normalising’ the image) skews the results – see the sketch after this list.
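To illustrate that last bullet, here’s a rough sketch of the kind of pre-processing that ‘normalising’ usually involves. The exact steps vary between vendors, so treat these as common illustrative examples rather than a description of any specific system:

```python
import cv2

def normalise_face(path: str, size: int = 160):
    """Pre-process an image so it more closely resembles what a matcher expects."""
    image = cv2.imread(path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Stretch the contrast so lighting differences are smoothed out.
    equalised = cv2.equalizeHist(gray)

    # Force every face crop to the same dimensions before matching.
    return cv2.resize(equalised, (size, size))

# Each edit moves the image further away from the photo that was actually
# captured, which is why heavy normalisation can skew match results.
probe = normalise_face("captured_frame.jpg")  # placeholder path
```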

The GDPR in relation to facial recognition technologies

For the moment, facial recognition is a new technology, and so most deployments are likely to result in a ‘high risk to the rights and freedoms of data subjects’, especially when used for predictions, tracking, employment or public sector purposes. Therefore, a Data Protection Impact Assessment (DPIA) must be done (and done properly!). An inadequate DPIA makes it likely that the use of facial recognition technology will be unlawful – not because it’s facial recognition particularly, but because the GDPR requires data protection by design and by default, and this is not usually achievable without a robust DPIA to start from.

A robust DPIA requires diversity of input, and so consulting with data subjects is a critical part of evaluating the fairness and proportionality of the processing. The people most likely to be negatively impacted by facial recognition technologies are those of BAME origin, people with injuries or conditions which affect their facial features, identical siblings, victims of tech-enabled stalking or harassment and people who identify as non-binary. For a facial recognition system to be fair and lawful, it must not produce racist, sexist, ageist or ableist results and should not put already-vulnerable people at increased risk.

The lawful basis for facial recognition will depend on the purpose it’s being used for, and how it works. Although facial geometry is a type of biometric data, as long as it isn’t being used to identify individuals, it’s not a form of special category processing. An example of this is basic sentiment analysis, where the expressions on people’s faces are observed and sorted into categories by an automated system. A shopping centre or museum might use this to work out which displays are amusing, or whether people are more grumpy on Mondays than Fridays.
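As a rough illustration of how such a system could avoid identifying anyone, the sketch below keeps only aggregate counts of expression categories and discards everything else. The classifier itself is a hypothetical placeholder, not a real product:

```python
from collections import Counter
from typing import Iterable

def classify_expression(face_crop) -> str:
    """Hypothetical stand-in returning e.g. 'amused', 'neutral' or 'grumpy'."""
    raise NotImplementedError  # placeholder only; not part of this article

def summarise_footfall(face_crops: Iterable) -> Counter:
    tally = Counter()
    for crop in face_crops:
        tally[classify_expression(crop)] += 1
        # The crop itself is then discarded: no template, identifier or image
        # is retained, so no individual is being identified or tracked.
    return tally

# Example output might look like: Counter({'neutral': 412, 'amused': 57, ...})
```

Because nothing that leaves this kind of process can be tied back to an individual, the results are only aggregate statistics – provided, of course, that no identifying data is captured or kept along the way.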

However, when facial features are used for identification, they do become special category personal data, and establishing a lawful basis becomes more challenging, as both Article 6 and Article 9 need to be satisfied.

‘Identification’ involves any of the following:

  • Recording a face so that it can be searched for and/or recognised in future
  • Checking for matches in a database of already-known faces (or values representing those faces) – a sketch of this kind of matching follows the list
  • Looking for or finding a particular face
  • Carrying out specific actions or measures in response to detecting a particular face
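To show what that database-matching can look like in practice, here’s a rough sketch of comparing a captured face against stored values (‘embeddings’) representing already-known faces. The embedding size, the stored values and the distance threshold are all illustrative assumptions:

```python
from typing import Optional

import numpy as np

# Hypothetical watchlist: identity -> stored embedding vector. The values are
# random placeholders, purely for illustration.
known_faces = {
    "person_a": np.random.rand(128),
    "person_b": np.random.rand(128),
}

def find_match(probe: np.ndarray, threshold: float = 0.6) -> Optional[str]:
    """Return the closest stored identity if it falls within the threshold."""
    best_name, best_distance = None, float("inf")
    for name, stored in known_faces.items():
        distance = float(np.linalg.norm(probe - stored))
        if distance < best_distance:
            best_name, best_distance = name, distance
    # Because a face is being linked back to a known individual, this is
    # biometric identification, i.e. special category data under Article 9.
    return best_name if best_distance < threshold else None

print(find_match(np.random.rand(128)))
```

The important point for GDPR purposes is the lookup against known identities: as soon as the system is answering ‘who is this?’, Article 9 is engaged.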

Consent is only valid if it can be given or refused before the processing takes place, if there is a genuine free choice, and if there is some positive indication that consent for the specific processing has been given, separate from any other action.

Other lawful bases may apply – but only if the use of facial recognition technology is necessary, not merely convenient or fashionable. If there is a suitable alternative (even if that alternative is a little slower, a little more resource-intensive, or a little less exciting), then facial recognition technology should not be used.

Assuming that a robust DPIA has been done and ‘data protection by design and by default’ has been applied all the way through from concept to design, to testing, to deployment, a GDPR-compliant facial recognition system will:

  • only be used where necessary, proportionate, and legitimate,
  • operate based on processing that is fair, lawful & transparent,
  • use accurate, adequate and relevant data only, for the minimum amount of time necessary to fulfil the purpose of processing,
  • be secure against unauthorised tinkering, accidental degradation, exposure or exfiltration,
  • support the exercise of data subject rights,

and all of those aspects must be demonstrated with convincing evidence.

Only for as long as all of those things remain true will the use of facial recognition be compliant with the GDPR.

Facial recognition technology isn’t inherently ‘bad’ or ‘wrong’, but it is very powerful and the consequences of using it irresponsibly can be serious.

In August 2019, a watchdog in Sweden penalised a local authority for trialling facial recognition on high-school students to keep track of attendance. Full BBC story here.

If you would like to discuss this further, or have any questions about your requirements, please feel free to give us a call on 020 3691 5731, email hello@protecture.org.uk or fill out our contact form here.