Our work: Analyze, Support, Educate

Analyze

the risks posed by frontier AI

AI model evaluation for the European Commission

Selected by the European AI Office to evaluate the manipulation risks of general-purpose AI models, as part of the implementation of the AI Act.


Official evaluator for the AI Act


Evaluating the reliability of AI oversight systems

BELLS, our benchmark for measuring the ability of AI systems to detect problematic behavior in other models.


Adopted by the UK AI Safety Institute

Support

institutions in addressing AI challenges

Global Call for AI Red Lines

CeSIA initiated and co-leads an international call for the establishment of red lines in AI development.

As seen in

El País · Le Monde · NBC News · BBC · The New York Times
300+ notable signatories
15 Nobel and Turing laureates
11 former heads of state

Bringing the debate to international forums

CeSIA engages in all major international forums and regularly runs workshops and side events to unite AI safety stakeholders.

Highlighted events

Grand Symposium on AI Safety — Sorbonne University


Yoshua Bengio, Stuart Russell and Dragos Tudorache gathered for a dialogue on the safety challenges of the most advanced AI systems.

AI & Media Masterclass — ENS Ulm


With Yoshua Bengio and Reporters Without Borders, 15 leading journalists gathered to discuss the challenges AI poses for the press.

Supporting French institutions in addressing AI risks

From CNIL to the Ministry of Defence, from DGE to the Office of the Minister for Artificial Intelligence: CeSIA supports the full range of institutions shaping French AI policy.


GPAI Code of Practice

Some of our contributions were adopted verbatim in the Code of Practice for general-purpose AI models published by the European AI Office.

Educate

on AI safety issues

Teaching at top universities

Sciences Po, ENS Ulm, ENS Paris-Saclay.

3 accredited courses across France's grandes écoles

The first comprehensive textbook on AI safety

A comprehensive, regularly updated guide to understanding and mitigating risks from advanced AI systems.

1,000+ students have taken a course using the Atlas

International training program

Eight-day in-person bootcamp, with a technical programme and a governance & strategy programme. Originally a CeSIA programme, now run by a partner organization.

200+ people trained since 2024
98% satisfaction rate

Get in touch

Working on AI issues and want to connect with our team? Get in touch.