The European Data Protection Board (EDPB) has set out its doubts on the lawfulness of Clearview AI’s facial recognition app in a letter to concerned MEPs.  Clearview AI is an American company with a facial recognition product which has been reported as being widely used by US law enforcement agencies and private companies.  Its product has caused some controversy, as it works by “scraping” information from the internet, cross-referencing images and video footage with publicly available images and information to identify individuals.

The EDPB’s letter is without prejudice to any future investigation, and notes that several EDPB members have already started to inquire about the use of facial recognition technologies in their respective jurisdictions.  For now, the EDPB’s position is that Clearview AI’s facial recognition product does not meet the conditions set out in the Law Enforcement Directive.  The Law Enforcement Directive does allow law enforcement authorities to process biometric data for the purposes of uniquely identifying a natural person, but under strict conditions.  These conditions include ensuring that processing is strictly necessary and subject to appropriate safeguards for the rights and freedoms of data subjects.  Lawful processing must also comply with strict conditions for any transfers to private operators in third countries outside the EU.  Sharing data with a private party outside the EU and the biometric matching of such data against a “mass of arbitrarily populated database of photographs and facial pictures accessible online” would not, in the EDPB’s view, be lawful.

Clearview AI’s facial recognition product is also being challenged in the US: a class action lawsuit has been filed by the American Civil Liberties Union and a private practice law firm against it for allegedly storing individuals’ biometric information without their knowledge. 

We are likely to see further debate on facial recognition in the coming months.  A recent draft of an EU Commission White Paper on AI caused a stir by suggesting a possible three-to-five-year moratorium on the use of facial recognition technology, although that suggestion was dropped from the final version.  Convenience may drive businesses to continue to experiment, despite the controversy surrounding the technologies: Eurostar recently announced that it is rolling out a facial verification process for its passengers.  However, serious concerns have been raised about the ability of facial recognition systems to identify black and minority ethnic individuals accurately, and about their use by law enforcement agencies.  IBM recently published an open letter to the US Congress explaining that it was no longer offering general purpose facial recognition or analysis software, emphasising that it “firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms”.  Amazon also announced that it would stop police forces from using its Rekognition software for one year, to give US Congress time to enact legislation governing its use.  With these issues firmly in the spotlight, organisations intending to use facial recognition will not be able to escape considering whether their product can accurately identify people of colour.