Public sector AI use increases, transparency doesn’t improve: Odap publishes the new edition of its citizen inventory
In November 2025, Odap is publishing the second edition of its citizen inventory of public sector algorithms, which documents the algorithms and artificial intelligence systems used in French public services. The inventory was compiled collaboratively with over forty volunteer contributors. Its results show both the rise of AI in administrations and the persistent opacity surrounding these systems. Administrations currently communicate very little information, particularly regarding budgets and evaluations.
To get in touch: contact@odap.fr.
Results of the second edition: more and more AI in public services, just as much opacity
The inventory now documents 120 systems. By nature, it is not exhaustive and is intended to be updated over time.
Contributors, regardless of their level of expertise on public algorithms, reported persistent difficulties in finding information about algorithms and AI systems. In some cases, it is difficult even to understand what an AI system is for and what role it plays in the decision-making process. While many agencies have announced projects to accelerate AI adoption through “strategic partnerships” with private companies, very little information is available to date on the concrete terms of these partnerships.
Opacity persists:
- A lack of published evaluations: only 13 algorithms have been the subject of internal evaluations published by the administration, barely more than 10% of the systems documented.
- Scarce information on the cost of systems: financial information is available for only 18 algorithms (15% of systems), and it is often partial, for example covering a single year or only part of the budget.
- 18 algorithms do not comply with their legal transparency obligations: algorithms used to make administrative decisions are subject to enhanced transparency requirements under French law. Of the 24 algorithms in the inventory that are subject to the obligation to publish their rules online, we estimate that 18 do not comply.
Generative AI is making its way into the public sector, with numerous chatbots (conversational agents) deployed as information-search and writing-assistance tools, alongside transcription and translation tools.
Algorithms that are noteworthy for their purpose or scope:
- Parole: software for transcribing hearings of child victims interviewed by law enforcement.
- The risk-scoring algorithm used by CNAM (national health insurance fund) to detect potential fraud among recipients.
- Several systems developed by France Travail (the French unemployment agency), including the employability score, ChatFT and MatchFT. While the agency maintains a public list of its algorithms (to comply with legal requirements), these crucial systems do not appear on it.
Calls to action
- Public agencies must be transparent about their uses of AI, their costs, who designs them, and their results. Transparency is a necessary (though not sufficient) condition for deploying these systems: it allows debate on the choices being made and a clear view of the systems’ true performance and costs… and, sometimes, leads to refusing to use an algorithm altogether.
- Artificial intelligence in the public sector is not a technical issue: it is deeply political, and everyone has a legitimate claim to engage with it. Unions, civil society organizations, collectives, journalists, concerned citizens: get in touch if this is something you want to work on with us.
- Particular attention must be paid to partnerships between administrations and private companies, especially those involving the development of generative AI.
Context: Odap and the citizen inventory of public algorithms
The Observatory of public sector algorithms (Odap) is a French civil society organization that creates and gathers information on algorithms and artificial intelligence systems used by French government agencies, to make them more transparent and contribute to their independent evaluation.
The inventory documents algorithms and AI systems used by the central government, which includes ministries and state agencies. Its objectives:
- Offer an overview of the algorithms used by government agencies, centralizing information so that it is useful to associations and to collectives of citizens, professionals, activists, or researchers.
- Show that algorithms are political, by highlighting the importance of choices and contexts that influence how they work.
To do this, it relies on public information from:
- The agencies themselves, through their strategy and communications documents;
- Other institutional sources: reports from the Court of Auditors and the Defender of Rights;
- External sources: researchers, journalists, associations, and unions that have investigated these systems.
The inventory goes beyond the technical and also documents the design and development context: the administrations leading each project, their partners (especially private companies), budgets, and evaluations.
The dataset is available for download on the French open data portal.
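For readers who want to explore the data themselves, here is a minimal sketch, assuming the dataset has been downloaded locally as a CSV file. The file name (odap_inventory.csv) and the column names (evaluation_published, budget_information) are hypothetical placeholders, not the actual schema: check the dataset on the open data portal for the real field names.

```python
# Minimal sketch: load the inventory and compute a few shares.
# The file name and column names are hypothetical; adapt them to the real schema.
import pandas as pd

df = pd.read_csv("odap_inventory.csv")  # hypothetical local file name

print(f"Systems documented: {len(df)}")

# Hypothetical boolean-like columns indicating whether an evaluation was
# published and whether budget information is available.
for col in ("evaluation_published", "budget_information"):
    if col in df.columns:
        share = df[col].fillna(False).astype(bool).mean()
        print(f"{col}: {share:.0%} of systems")
```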
New in this second edition
- A collaborative process: over forty volunteer contributors worked together to document the algorithms. The call for contributions was open to contributors of all levels of expertise.
- A new interface, making it easier to visualize the systems.