Marie-Laure Denis, president of the National Commission for Information Technology and Liberties (CNIL), remains vigilant about artificial intelligence (AI), on which the authority published guidance sheets for companies before the summer. “If we want the French to take ownership of this technology, we must show them that their rights will be protected,” says the State Councilor, at a time when American digital giants such as Meta accuse European regulation of stifling innovation.
The guardian of privacy is nevertheless open to adapting and simplifying procedures, in particular to promote health research. But she also warns about the problems raised by the use of AI in the workplace and in public services, and calls for an evaluation of the algorithmic video-surveillance experiment conducted during the Paris Olympic Games before it is made permanent.
Players like Meta (Facebook, Instagram) believe they should be allowed to train their AI models on content posted by users on their platforms. What do you think?
These questions are central. Meta, X and LinkedIn have indeed sought to reuse data published on their services. The Irish data protection authority has therefore opened a review, and European countries will have to take a position before the end of the year. The issue raises questions, particularly about competition, because these companies could leverage their users’ data to establish a dominant position in AI.
More generally, AI developers also train their models on third-party data accessible online, on websites and elsewhere. But the fact that data has been made public does not mean it is not personal. Thus Clearview, which trained its facial recognition software on photographs published on social networks, was sanctioned by the CNIL. Training AI with personal data is not prohibited, but the European data protection regulation [Règlement général sur la protection des données, RGPD] must be respected.
But Meta says the Irish CNIL is preventing it from training and launching its AI assistants in the European Union (EU), because it is requiring the company to ask for its users’ consent, for example…