On January 16th, 2020, Bloomberg published a short news piece about the European Commission (EC), which was apparently going to unveil a paper on artificial intelligence ("AI") in mid-February. According to the news agency, the EC would recommend a new regulation including a 5-year moratorium on facial recognition in public places, although it was noted that the paper's final version was likely to change. Let's try to summarise what we have as objectively as possible:
- A paper, not a law and not even a legislative proposal, but just an opinion. Of course, the opinion of the EC is relevant, but it must still pass through the long and tedious European legislative machine before reaching the stage of law. For instance, the first step towards the GDPR was an EC opinion dated June 2011; in other words, 5 years (the same term as the alleged moratorium) elapsed from the first proposal to the approval of the Regulation, which was indeed very different from the initial approach.
- The need for new regulation of the use of AI in sectors such as healthcare and transport, including facial recognition technologies.
- A rumour regarding a 5-year moratorium. The idea hovers over the news as a possible element of the paper, but no solid evidence is provided to support it.
- A final version likely to change. In other words, the question of the moratorium is, in the end, sheer speculation.
What can we infer from these data?
Not much. Summing up, the EC is evaluating a proposal to regulate the use of facial recognition in public areas, but it is difficult to guess what its content will be and even harder to foresee what regulation, if any, will finally be approved. Within this frame, a 5-year moratorium makes little sense because, going back to the GDPR, 5 years is the time that could elapse before the approval of a future European Regulation on the matter, even assuming the legislative machine were started up right now.
What have some media inferred from these data?
Headlines of catastrophe, as usual. Here are some examples, ranging from DEFCON 2 to the Apocalypse:
- Politico: “EU considers temporary ban on facial recognition in public spaces”
- BBC: “Facial recognition: EU considers ban of up to five years”
- MIT Technology Review: “The EU might ban facial recognition in public for five years”
- The Telegraph: “European Commission mulls ban on facial recognition technology”
- Techerati: “Who watches the watchmen? Why Europe is right to ban facial recognition”
How long did the alarm ring?
Less than a fortnight, we should say, because on January 30th Reuters reported that the EU drops idea of facial recognition ban in public areas: paper. If such an idea ever existed, we might add.
If we dive into the news item, we find two very interesting things. First, the Commission intended to recommend specific regulation in some key sectors, by means of a paper expected on February 19th. Second, there is a priceless quote from Brad Smith, the president of Microsoft, who said that "a facial recognition AI ban is akin to using a cleaver instead of a scalpel to solve potential problems".
So, once again, how long did the alarm ring?
To be honest, it is still ringing. None of the media outlets that fed the panic have published the denial, which has appeared on other sites. In any case, outside the sector, most people do not know that the EC is not considering, nor mulling, any kind of prohibition, temporary or otherwise, on facial recognition. On the contrary, there are still news items about facial recognition in which a ban that never existed is still mentioned.
Did the EC finally issue a paper about AI?
Yes, it did. Right on time, on the morning of February 19th the EC issued the "White Paper on Artificial Intelligence – A European approach to excellence and trust". The content of its 27 pages can be summarised in two main objectives. First, to support the development of AI in the European Union ("EU"), because the EC deems it a strategic sector for preserving the EU's economic growth and technological leadership. Second, the EC wants to set forth a new legal framework that avoids fragmentation of the single market and gives legal certainty both to citizens, in order to protect their fundamental rights (mainly security and privacy), and to the companies working with AI.
This new legal framework shall cover only those sectors deemed high-risk, including:
- Healthcare
- Transport (especially autonomous vehicles)
- Some public services, such as:
  - Asylum
  - Migration
  - Border controls
  - Judiciary
  - Social security
  - Employment services
- Facial recognition
Here it is: the EC report includes a specific, very short section entirely dedicated to facial recognition, from which we can synthesise the following three conclusions:
- Only facial identification is deemed a high-risk use, that is to say, the identification of a person, for instance in a video surveillance system, against a database of several people. Meanwhile, access control, that is to say the authentication or verification of a person's identity against the image of that same person kept in the system, is not deemed high-risk and will therefore be spared from this new legislation (a brief sketch of this 1:N versus 1:1 distinction follows this list).
- There is an open debate to decide in which cases facial recognition can be used; in fact, the paper is open to public consultation.
- The legal framework shall be the same for the whole EU, in order to avoid fragmentation of the single market and to provide legal certainty.
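To make the distinction concrete, here is a minimal, purely illustrative sketch in Python of the difference between verification (1:1, access control) and identification (1:N search against a database), the latter being the use the White Paper treats as high-risk. The embedding-comparison approach, the function names and the similarity threshold are assumptions made for the sake of the example, not anything prescribed by the EC or taken from a real system.

```python
import numpy as np
from typing import Dict, Optional

# Hypothetical sketch: the embeddings and the 0.7 threshold are illustrative
# placeholders, not values from the White Paper or from any real product.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 means identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolled_template: np.ndarray,
           threshold: float = 0.7) -> bool:
    """Access control (1:1): compare the probe against the single template
    stored for the claimed identity. Not treated as high-risk in the paper."""
    return cosine_similarity(probe, enrolled_template) >= threshold

def identify(probe: np.ndarray, database: Dict[str, np.ndarray],
             threshold: float = 0.7) -> Optional[str]:
    """Identification (1:N): search the probe against a database of many
    people, e.g. from video surveillance. This is the high-risk use."""
    best_name, best_score = None, threshold
    for name, template in database.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name
```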
Undoubtedly this new regulation will be welcomed by all stakeholders, because the GDPR, despite appearances, does not sufficiently regulate the processing of biometric data. Although its Article 9.2 sets forth several cases in which the use of such data is permitted, none of the following exceptions (or any other) has subsequently been duly developed by law:
- Explicit consent of the data subject.
- Carrying out obligations and exercising specific rights in the field of employment and social security.
- Establishment, exercise or defence of legal claims.
- Reasons of substantial public interest. In the paper, the EC deems this exception the most suitable for facial recognition.
In conclusion, all stakeholders need, for elementary reasons of legal certainty, a secure path in which facial recognition developers, private security companies and even law enforcement bodies can use the most suitable solutions according to the law, while citizens' rights to security and privacy are duly respected. But, of course, this approach, being a consensus solution far removed from catastrophe, does not seem to make hot news.