AI systems compliance: other guides, tools and best practices
21 September 2022
To enable a more detailed evaluation of processing based on AI techniques, the CNIL provides a non-exhaustive list of resources for assessing AI systems.
The CNIL was not involved in drafting the references listed in this sheet: readers are encouraged to consult and use them for information, but doing so does not constitute a guarantee of compliance.
Publications and standards
- Certification of processes for AI published by the French National Laboratory of Metrology and Testing (Laboratoire National de Métrologie et d'Essais - LNE)
- Ethics guidelines for trustworthy AI (published by the High-Level Expert Group on AI or AI HLEG)
- Guidelines on automated individual decision-making and profiling published by the Article 29 Working Party (endorsed by the European Data Protection Board - EDPB)
- AI principles and Framework for the classification of AI systems published by the OECD
- Recommendation on the ethics of artificial intelligence published by UNESCO
- Report on artificial intelligence and fundamental rights by the European Union Agency for Fundamental Rights (FRA)
- Guide to the ethics and governance of artificial intelligence for health by the World Health Organization (WHO)
- Guidance on AI and data protection by the UK data protection authority (Information Commissioner's Office - ICO)
- Audit requirements for personal data processing activities involving AI and GDPR compliance of processing that embeds artificial intelligence - An introduction (published by the Spanish data protection agency: Agencia Española de Protección de Datos - AEPD)
- Artificial intelligence and privacy report published by the Norwegian data protection authority (Datatilsynet)
- Guide on automated decision-making in Catalonia published by the Catalan data protection authority (APDCAT)
- Australia’s artificial intelligence ethics framework published by the Australian Government Department of Industry, Science, Energy and Resources
- AI-related publications from the Federal Trade Commission (US)
- Opinion on the impact of artificial intelligence on fundamental rights, published by the French National Consultative Commission on Human Rights (Commission nationale consultative des droits de l’homme - CNCDH)
- Recommendations of good practices for "ethics by design" in AI solutions, published by the French eHealth Agency and the Ministry of Health and Prevention (Agence du Numérique en Santé et ministère de la Santé et de la Prévention)
Assessment tools
- The assessment list on trustworthy AI published by the AI HLEG in application of the Ethics guidelines for trustworthy AI (see above)
- The descriptive grid of medical devices using machine learning processes published by the French National Authority for Health (Haute Autorité de Santé)
- The Ethical AI tool created by the Numeum trade association
- The work on ethics in autonomous and intelligent systems published by the IEEE
- Analysis of bias in AI systems published by the US National Institute of Standards and Technology (NIST)
- Algorithmic impact assessment tool implemented by the Government of Canada
- Work on standardisation in the area of AI published by the ISO
- Work on the AI fairness checklist and Datasheets for datasets published by the Association for Computing Machinery (ACM)
- Procedure for conducting conformity assessment of AI systems from the University of Oxford
- Responsible and trustworthy data science evaluation framework published by Labelia Labs
- Privacy Library of Threats 4 AI published by Plot4AI
Development tools
- The LIME Python module (Local Interpretable Model-agnostic Explanations) developed by the University of Washington
- The SHAP Python module (Shapley Additive Explanations) developed by the University of Washington (both modules are illustrated in the interpretability sketch below)
- Resources on the assessment of the societal risks of AI algorithms published by the Toulouse National Institute of Applied Sciences (Institut National des Sciences Appliquées - INSA)
- Trusted AI tools published by IBM
- The Fairlearn module published by Microsoft (see the fairness sketch below)
- Tools published by Google's People + AI research (PAIR) team
- The website for the Inria research project on the interpretability of AI systems, HyAIAI
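
For readers unfamiliar with the two interpretability modules listed above, the following minimal sketch shows LIME and SHAP applied to a toy scikit-learn classifier. The model, dataset and parameter choices are illustrative assumptions, not recommendations from the CNIL or the tool authors.

    # Minimal sketch, assuming scikit-learn, lime and shap are installed
    # (pip install scikit-learn lime shap). All choices below are illustrative.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer
    import shap

    # Train a simple classifier whose predictions we want to explain.
    data = load_iris()
    model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

    # LIME: fit a local surrogate model around a single prediction.
    lime_explainer = LimeTabularExplainer(
        data.data,
        feature_names=data.feature_names,
        class_names=list(data.target_names),
    )
    explanation = lime_explainer.explain_instance(
        data.data[0], model.predict_proba, num_features=4
    )
    print(explanation.as_list())  # (feature, weight) pairs for this instance

    # SHAP: attribute the same prediction to features via Shapley values.
    shap_explainer = shap.TreeExplainer(model)
    shap_values = shap_explainer.shap_values(data.data[:1])
    print(shap_values)  # per-class feature contributions

Both modules answer the same question ("which features drove this prediction?") with different approaches: LIME fits a local linear surrogate model, while SHAP computes additive Shapley attributions.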
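
The following minimal sketch shows the Fairlearn module listed above computing a simple group fairness metric; the labels, predictions and sensitive attribute are invented purely for illustration.

    # Minimal sketch, assuming fairlearn and scikit-learn are installed
    # (pip install fairlearn scikit-learn). The data below is invented.
    import numpy as np
    from fairlearn.metrics import MetricFrame, demographic_parity_difference
    from sklearn.metrics import accuracy_score

    y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # hypothetical ground truth
    y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])  # hypothetical model output
    group = np.array(["A", "A", "A", "B", "B", "B", "B", "A"])  # sensitive attribute

    # Break a standard metric down by the groups defined by the sensitive attribute.
    frame = MetricFrame(metrics=accuracy_score, y_true=y_true,
                        y_pred=y_pred, sensitive_features=group)
    print(frame.by_group)

    # Demographic parity difference: gap in positive-prediction rates between groups.
    print(demographic_parity_difference(y_true, y_pred, sensitive_features=group))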
Would you like to contribute?
Write to ia[@]cnil.fr