Clearview AI fined in UK for illegally storing facial images


Facial recognition company Clearview AI has been fined more than £7.5m by the UK’s privacy watchdog and told to delete the data of UK residents.

The company gathers images from the internet to create a global facial recognition database.

The Information Commissioner’s Office (ICO) says this practice breaches UK data protection laws.

It has ordered the firm to stop obtaining and using the personal data of UK residents.

The BBC has approached Clearview AI for comment.

‘Unacceptable’ data use

The ICO says that, globally, the company has stored more than 20 billion facial images.

Clearview AI takes publicly posted pictures from Facebook, Instagram and other sources, usually without the knowledge or permission of the platforms or the people pictured.

John Edwards, UK Information Commissioner, said: “The company not only enables identification of those people, but effectively monitors their behaviour and offers it as a commercial service. That is unacceptable.”

Mr Edwards continued: “People expect that their personal information will be respected, regardless of where in the world their data is being used.”

The ICO said Clearview AI Inc no longer offered its services to UK organisations but, because the company had customers in other countries, it was still using personal data of UK residents.

In November 2021, the ICO said that the company was facing a fine of up to £17m – almost £10m more than the penalty it has now imposed.

The UK has become the fourth country to take enforcement action against the firm, following France, Italy and Australia.

‘Search engine for faces’

The company’s system allows a user to upload a photo of a face and find matches in a database of billions of images it has collected.

It then provides links to where matching images appear online.
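In general terms, this kind of facial search works by converting each face into a numerical "embedding" and then finding the stored embeddings nearest to the query. The sketch below is purely illustrative of that general technique – it is not Clearview's actual system, and the embedding vectors, URLs and function names are all hypothetical stand-ins.

```python
import numpy as np

def nearest_faces(query_embedding, db_embeddings, db_urls, top_k=3):
    """Return the source URLs whose stored face embeddings are closest
    to the query, ranked by cosine similarity.

    query_embedding: 1-D vector from some face-embedding model (hypothetical)
    db_embeddings:   2-D array, one embedding row per stored image
    db_urls:         source URL for each stored embedding
    """
    # Normalise so the dot product equals cosine similarity.
    q = query_embedding / np.linalg.norm(query_embedding)
    db = db_embeddings / np.linalg.norm(db_embeddings, axis=1, keepdims=True)
    scores = db @ q
    # Highest-scoring (most similar) entries first.
    order = np.argsort(scores)[::-1][:top_k]
    return [(db_urls[i], float(scores[i])) for i in order]

# Toy data: 128-dimensional embeddings, a common size for face models.
rng = np.random.default_rng(0)
db = rng.normal(size=(5, 128))
urls = [f"https://example.com/photo{i}" for i in range(5)]
# A query that is a near-duplicate of stored photo 2.
query = db[2] + rng.normal(scale=0.01, size=128)
print(nearest_faces(query, db, urls, top_k=1))
```

At the scale the ICO describes – more than 20 billion images – a real system would use an approximate nearest-neighbour index rather than the brute-force comparison shown here, but the matching principle is the same.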

The ICO found that Clearview AI Inc breached UK data protection laws by failing to:

  • use the information of people in the UK in a way that is fair and transparent
  • have a lawful reason for collecting people’s information
  • have a process in place to stop the data being retained indefinitely
  • meet the higher data protection standards required for biometric data

It also found the firm had requested additional personal information, including photos, when members of the public asked whether they were on its database.

The ICO’s action comes after a joint investigation with the Office of the Australian Information Commissioner.

Mr Edwards said: “This international co-operation is essential to protect people’s privacy rights in 2022.

“That means working with regulators in other countries, as we did in this case with our Australian colleagues.”

Analysis by Zoe Kleinman, technology editor

Clearview AI has long been a controversial company.

Its founder Hoan Ton-That insists that the firm’s mission is to “help communities and their people to live better, safer lives” and that all the data it has collected is freely available on the internet. He says Clearview’s enormous database of faces has successfully helped law enforcement to fight “heinous” crimes.

Clearview no longer does business in the UK, but its previous clients include the Metropolitan Police, the Ministry of Defence, and the National Crime Agency. However, its entire database of 20 billion images, which inevitably includes UK residents, will still be available to those it works with in other countries.

Will we ever know who was actually on it? Probably not – but if there are photos of you on the internet, you may well be in it. And you are very unlikely to have been asked if that’s OK.

When Italy fined the firm €20m (£16.9m) earlier this year, Clearview hit back, saying it did not operate in any way that brought it within the jurisdiction of the EU privacy law, the GDPR. Could it argue the same in the UK, where it also has no operations, customers or headquarters?

It can now challenge the ICO’s decision – and perhaps it will.
