Asking the right questions before using an artificial intelligence system
Integrate AI proportionately and specify a clear objective.
Objective and proportionality
As with any collection and use of personal data, processing using AI techniques must be carried out with due regard for data protection.
When setting up an AI system, it is important to consider its objective (purpose) and the proportionality of the techniques chosen: are they strictly necessary in order to achieve this objective?
The following questions may help the data controller to assess the risks associated with the proposed system and its proportionality ahead of the design stage.
Is personal data processed by the system?
Is the objective (purpose) of the processing based on the use of an AI system clearly defined?
Are the learning and production phases of the AI system separate?
If so, is a second assessment planned for the production phase?
Have the individuals who will be required to interact with the AI system, or about whom an automated decision will be made, been identified?
What are their characteristics (age, gender, physical details, etc.)?
How many of them are there?
Can the processing directly or indirectly target vulnerable individuals (e.g. children, patients or employees)?
Will the processing result in legal, financial or physical consequences for the health, social status or safety of the individuals directly or indirectly targeted by the AI system?
Does the AI system replace another type of system for the task it is assigned?
Why does that system need to be replaced?
Does the AI system have a significant advantage (in terms of technical efficiency, cost, protection of privacy, etc.) compared to other available solutions?
Does this significant advantage outweigh the potential additional risks associated with the use of an AI system (see fact sheet 6 on the impact on fundamental rights)?
In view of your answers to the previous questions, does the use of an AI system to achieve the identified purpose seem proportionate and necessary?
Providers, users of AI systems and individuals
If the data controller is not the AI system provider, the sharing of responsibilities between these two parties must be formalised.
These responsibilities must be clear to the individuals involved in implementing the processing, to those whose data is processed, and to those on whom the system will have an impact.
If personal data is collected and/or used, has a data controller been identified?
Are the legal persons in charge of AI system development, deployment and monitoring clearly defined?
Do the natural persons in charge of the development of the system have the appropriate training?
Have they been made aware of the legal, technical, ethical and moral issues of AI?
Is there an internal charter or policy governing the design and deployment of AI systems?
Are the individuals in charge of maintaining the AI system, correcting problems and operating it clearly identified and known to everyone?
Is there monitoring in place to ensure that this support remains available at all times, for example during holiday periods?
Would you like to contribute?
Write to ia[@]cnil.fr