AI model evaluation for the European Commission
Selected by the European AI Office to evaluate the manipulation risks of general-purpose AI models, as part of the implementation of the AI Act.
Official evaluator for the AI Act

The risks posed by frontier AI

BELLS, our benchmark for measuring the ability of AI systems to detect problematic behavior in other models.
Adopted by the UK AI Safety Institute
Every week, CeSIA analyzes the events shaping the future of AI: technological advances, policy decisions, safety challenges. Independent analysis to understand what's at stake.
AI safety research, policy recommendations and updates on our work.



institutions in addressing AI challenges
CeSIA initiated and co-leads an international call for the establishment of red lines in AI development.
As seen in

CeSIA engages in all major international forums and regularly runs workshops and side events to unite AI safety stakeholders.
Highlighted events

Yoshua Bengio, Stuart Russell and Dragos Tudorache gathered for a dialogue on the safety challenges of the most advanced AI systems.

With Yoshua Bengio and Reporters Without Borders, 15 leading journalists gathered to discuss the challenges AI poses for the press.
From CNIL to the Ministry of Defence, from DGE to the Office of the Minister for Artificial Intelligence: CeSIA supports the full range of institutions shaping French AI policy.

Some of our contributions were adopted verbatim in the Code of Practice for general-purpose AI models published by the European AI Office.
on AI safety issues
Sciences Po, ENS Ulm, ENS Paris-Saclay.
A comprehensive, regularly updated guide to understanding and mitigating risks from advanced AI systems.
Eight-day in-person bootcamp, with a technical programme and a governance & strategy programme. Originally a CeSIA programme, now run by a partner organization.
Working on AI issues and want to connect with our team? Get in touch.