Policy Brief April 03, 2026

CeSIA Testified Before OPECST at the French National Assembly

By Arthur Grimonpont

Arthur Grimonpont at the French National Assembly

On 2 April 2026, Arthur Grimonpont, Head of Advocacy at CeSIA, was heard by the Parliamentary Office for the Evaluation of Scientific and Technological Choices (OPECST) as part of a session on the diffusion of AI innovations into everyday use.

Before members of parliament, Arthur Grimonpont set out an assessment of the "systemic" risks posed by AI development: information manipulation, biological risks, cyberattacks, and loss of control. These risks are growing more acute as AI capabilities advance in the absence of adequate safeguards.

"These companies are deploying systems to hundreds of millions of people — systems they themselves acknowledge have a significant chance of being weaponised for large-scale malicious purposes, or even of slipping beyond their control — and we are supposed to worry about the uncertainty that regulation creates for their business model. The uncertainty that should concern us is technological, not regulatory." — Arthur Grimonpont, Head of Advocacy at CeSIA

Three concrete recommendations were put to OPECST: strictly enforcing existing law (the Digital Services Act and the AI Act), making large-scale AI investment projects conditional on safety requirements, and supporting the development of a binding international agreement on the most dangerous uses of AI.

The full hearing is available to watch in the video below, along with CeSIA's position paper submitted to OPECST.