AI risks: The Clearview AI case

In recent years, the Clearview AI case has become one of the most important examples of the interaction between facial recognition technology and personal data rights, an example that affected not only society at large but also the European legislative procedure of the AI Act. Although the Clearview AI application does not qualify as an AI Medical Device, and is therefore not regulated as such but purely as an AI system, the significance of the example lies in the interaction between an application that uses facial recognition technology to collect biometric data, exactly as most AI Medical Devices do, and the rules of the GDPR.

It was the New York Times that revealed the existence of Clearview AI to the world. Clearview AI was a very small startup company providing facial recognition software. The company compiled a database of facial images collected from across the internet: not only from social media but also from professional platforms, YouTube, Twitter, LinkedIn, Instagram, anywhere a facial image or a video might be found. All these data were gathered and compiled into a single database. The existence of the company was unknown until January 2020, when the New York Times published its article about the secretive company that might end privacy as we know it.

Clearview AI's business worked as follows: a picture of someone could be taken and uploaded to the dedicated database, where it would be matched against other pictures resembling the one uploaded. The initial database contained around 3 billion images; by 2021, according to a letter published by the owner of Clearview AI, it held more than 10 billion. In October 2021 many data protection authorities launched investigations into the practices of Clearview AI, but these did not have the effect they should have had, and the company is still scraping images.

One of the most interesting aspects of the case is the way the platform functions and the “products” it sells to other stakeholders: as revealed in January 2020, access to the database is sold to law enforcement authorities but also to private companies. The idea is quite simple. Anyone could visit the Clearview AI website, upload a picture of someone, for example a picture of a suspect, and run it through the search engine. The uploaded image is searched against images that closely resemble it, and this is where the data protection violations are traced: in how these images were collected and processed to build the database. An automatic comparison of the biometric characteristics is made and a matching result is produced. The final result is served to the client together with a URL where the matched images can be viewed. As expected, Clearview AI adds a reservation notice to the results report, stating that the matches should not be used as evidence but only as leads for the law enforcement authority.

Once its practice became known, several civil rights organizations filed complaints against Clearview AI, while concerns were also raised at other scientific levels.
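To make the matching step described above more concrete, the sketch below shows in Python how such a search is typically structured: every scraped image is reduced to a numerical embedding (a biometric template), and a probe image is compared against the whole gallery by cosine similarity, returning the most similar images together with their source URLs. This is only an illustrative sketch under generic assumptions, not Clearview AI's actual system; the embeddings are random placeholders standing in for the output of a face recognition model, and the URLs are invented.

    import numpy as np

    # Hypothetical gallery: one embedding vector per scraped image plus its source URL.
    # In a real system the embeddings would come from a face recognition model;
    # random placeholders are used here so the sketch runs on its own.
    rng = np.random.default_rng(0)
    gallery_embeddings = rng.normal(size=(10_000, 512)).astype(np.float32)
    gallery_urls = [f"https://example.com/image/{i}" for i in range(10_000)]

    def normalize(v: np.ndarray) -> np.ndarray:
        # Scale vectors to unit length so a dot product equals cosine similarity.
        return v / np.linalg.norm(v, axis=-1, keepdims=True)

    def search(probe_embedding: np.ndarray, top_k: int = 5) -> list[tuple[str, float]]:
        # Compare the probe against every gallery image and return the best matches.
        gallery = normalize(gallery_embeddings)
        probe = normalize(probe_embedding)
        scores = gallery @ probe                    # cosine similarity per gallery image
        best = np.argsort(scores)[::-1][:top_k]     # indices of the highest scores
        return [(gallery_urls[i], float(scores[i])) for i in best]

    # A probe image would first be turned into an embedding by the same model;
    # here a random vector stands in for it.
    probe = rng.normal(size=512).astype(np.float32)
    for url, score in search(probe):
        print(f"{score:.3f}  {url}")

The legal significance of this architecture is that the identifying step happens when the embeddings are computed from the photographs, which is exactly the point at which, as discussed below, the GDPR's rules on biometric data come into play.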
The first to file a complaint was “None of Your Business” (noyb), a civil rights organization, which brought the case before a European data protection authority that issued an order against Clearview AI. In Canada and Australia, both data protection authorities have issued decisions concerning the practices of Clearview AI. In Canada, the police had been cooperating with Clearview AI until the scandal broke, which led to massive claims against the company, triggered investigations by the authorities and eventually forced Clearview AI out of the country. Several EU countries did the same, while in Italy a 20 million euro fine was imposed on Clearview AI.

However, the EU data protection framework is composed of:
– A general instrument (GDPR), the applicability of which to Clearview AI's practices is challenged due to its territoriality;
– A specific instrument in the field of law enforcement (the Law Enforcement Directive (LED)), which needs to be implemented at national level; and
– National rules applicable to law enforcement authorities when they use Clearview AI's tool.

With respect to the territorial scope of the GDPR, Article 3, as completed by Recital 24, provides the main rule: the GDPR applies to controllers (entities in charge of the processing) and processors (entities acting on behalf of controllers) that have an establishment in the EU. In the absence of an establishment, the GDPR applies to entities either:
1. Offering goods and services to individuals in the EU, or
2. Monitoring their behaviour, as far as that behaviour takes place in the EU,
which adds an extraterritorial aspect. In the case under examination, Clearview AI does not have an establishment in the EU. It is not offering services or goods to data subjects, but it is monitoring the behaviour of data subjects in the EU. This means that individuals must be ‘targeted’ in the EU: it is not the place of residence that counts but the location of the data subjects when their behaviour is monitored. According to the EDPB's Guidelines 3/2018 on the territorial scope of the GDPR, these conditions are cumulative.

Further to the above, there is also the question of the associated information that can be compiled together to identify the individual. This information is also personal data. What is interesting to determine is whether it constitutes biometric data and sensitive data. Read in isolation, the GDPR definition of biometric data might lead to the conclusion that the photographs collected by Clearview AI are biometric data, since they are collected for identification purposes. Recital 51 of the GDPR, however, states that photographs found online are not considered biometric data unless they are processed through specific technical means allowing the identification of a natural person; this is to avoid the burden that would follow from considering all photos online as biometric data. Furthermore, besides being personal data, the photographs transformed into biometric data are also sensitive data. Article 9(1) of the GDPR provides a higher level of protection for sensitive data (such as data relating to health, ethnicity etc.), and its exhaustive list includes biometric data processed for the purpose of uniquely identifying a natural person. In conclusion, mere photos collected from online sources do not qualify as biometric data, but they can be sensitive data if they reveal sensitive information.
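The rule of Article 3 described above can be summarised as a simple decision: an EU establishment brings the controller or processor within the GDPR directly, and, failing that, either of the two ‘targeting’ conditions does. The snippet below is only a simplified illustration of that reading, assuming the factual characterisation of Clearview AI given above (no EU establishment, no goods or services offered to EU data subjects, but monitoring of behaviour that takes place in the EU).

    def gdpr_applies(has_eu_establishment: bool,
                     offers_goods_or_services_in_eu: bool,
                     monitors_behaviour: bool,
                     behaviour_takes_place_in_eu: bool) -> bool:
        # Simplified reading of GDPR Article 3 (with Recital 24):
        # an EU establishment suffices; otherwise either targeting condition applies.
        if has_eu_establishment:
            return True
        return offers_goods_or_services_in_eu or (
            monitors_behaviour and behaviour_takes_place_in_eu
        )

    # Clearview AI as described above: no establishment, no goods or services
    # offered to EU data subjects, but monitoring of behaviour occurring in the EU.
    print(gdpr_applies(False, False, True, True))  # -> True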
As soon as these data are processed for identification purposes, for instance through the extraction of their features and their transformation into biometric templates, they are turned into sensitive biometric data. Clearview AI did not collect biometric data as such (it collected photographs found online), but as soon as it processed them for identification purposes, the images became sensitive biometric data, and their processing is regulated not only under the GDPR but also under Article 10 of the Police Directive. Nor were the images and the associated information publicly available in the legal sense: the websites from which they were collected were accessible, but the data were not thereby made publicly available or freely reusable.

The principles of transparency, fairness and lawfulness were also violated by Clearview AI's practice. Under the principle of transparency, individuals should be notified that their data are processed, which was not the case with Clearview AI. Fairness means that the processing must be in line with individuals' reasonable expectations of privacy; individuals publishing photographs of themselves (or of third parties) would not expect their images to be processed for identification purposes, including for police purposes. Lawfulness means that the processing must rely on one of the legal grounds provided by the GDPR for processing personal and sensitive data; lawfulness was also breached because the processing was not grounded on any such legal basis.

More generally, both for public authorities and for private entities, artificial intelligence systems that categorise individuals by biometric data (for example, by facial recognition) into groups according to nationality, gender, political or sexual orientation or other grounds of discrimination prohibited under Article 21 of the Charter of Fundamental Rights of the European Union, as well as artificial intelligence systems whose scientific validity has not been proven or which are in direct conflict with the scientific validity of the data, should be prohibited. The prohibition should also cover cases of determining human behaviour, human health and treatment, and predicting them for the future, independently of one's own free will. At the same time, AI systems intended to be used by law enforcement authorities to carry out individual risk assessments of natural persons, in order to calculate the risk of a natural person committing criminal offences or to predict the occurrence or recurrence of an actual or potential criminal offence based on a natural person's profile or on the assessment of characteristics and personality derived from previous criminal behaviour, could lead to the subordination of police action and judicial power to the results of artificial intelligence, thus objectifying the person under consideration.

Dr. Maria-Oraiozili Koutsoupia, Rythmisis President