Ensuring individuals can fully exercise their rights
Promoting transparency and rights for end users
Measuring the impact of processing on the fundamental rights of individuals
Some processing operations involving AI systems may be particularly intrusive or have significant consequences for individuals because of their scale, the characteristics of the data subjects or other specific factors.
In such a context, it is all the more important to preserve the rights and freedoms of individuals.
Have the consequences of the processing on the fundamental rights of individuals (rights to freedom of expression, freedom of thought, conscience and religion, freedom of movement, respect for private and family life, etc.) been taken into account?
Have the consequences for groups of individuals based on gender, origin, political or religious opinions, for society and democracy in general, been considered?
Has a formal framework (a DPIA or another structured assessment method) been used to carry out this impact assessment?
Informing individuals so they can regain control
The continuing unknowns about the potential flaws in algorithms and their impact on individuals and society call for caution and for allowing individuals to regain control of their data.
Have individuals been informed about the processing in a clear and concise manner?
How are they informed?
Is the information easily accessible?
Are individuals informed about the data collection, whether the data are collected directly from them or obtained indirectly?
Are individuals aware that they are interacting with a machine?
In cases where users interact with a machine (bot) or automatically generated content (e.g. deepfake), is this clearly indicated to the user?
Is this information provided explicitly, or is it apparent from the context of use?
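As an illustration, a chat interface can carry a standing disclosure so that the user always knows they are interacting with a machine. The sketch below is minimal and uses hypothetical names (`generate`, `reply`); it is not drawn from any specific framework:

```python
# Minimal sketch: every bot reply carries an explicit disclosure.
# `generate` is a hypothetical text-generation callable supplied elsewhere.

BOT_DISCLOSURE = "You are chatting with an automated assistant, not a human."

def reply(user_message: str, generate) -> dict:
    """Return the generated answer together with a machine-interaction notice."""
    return {"disclosure": BOT_DISCLOSURE, "text": generate(user_message)}
```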
If the data controller is a French administration, is information about the algorithmic processing to which individuals may be subject provided in accordance with the Code of Relations between the Public and the Administration and the Law for a Digital Republic?
If the data controller is an administration, is the source code of the algorithm made public by default?
If not, will it be sent to those who request it?
Has the impact on individuals' mental health and behaviour been studied?
Has the use of methods designed to influence individuals' choices been taken into account?
Could the system encourage excessive use of a product, akin to an addiction?
Could it facilitate harassment?
Providing a framework conducive to the exercise of data protection rights
It is the responsibility of the data controller to put in place the necessary measures to ensure that the data subjects of the processing operation can fully exercise their rights.
Which measures enable individuals to exercise their rights?
Are they informed of how these rights can be exercised and with whom?
Can the right to object to the processing of one's data be easily exercised by the individual for both the training and production phases? Can individuals exercise this right at any time (before and after the data is collected)?
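By way of illustration only, the following minimal sketch shows one way an objection registry could be consulted in both phases; all names (`opted_out_ids`, `filter_training_set`, `predict_with_objection_check`) are hypothetical and not taken from any specific library:

```python
# Minimal sketch of honouring the right to object in both phases.

opted_out_ids = {"subject-42", "subject-77"}  # registry of recorded objections

def filter_training_set(training_records):
    """Drop records of individuals who objected before (re)training."""
    return [r for r in training_records if r["subject_id"] not in opted_out_ids]

def predict_with_objection_check(model, record):
    """Refuse to process a record whose subject has objected (production phase)."""
    if record["subject_id"] in opted_out_ids:
        raise PermissionError("Data subject has objected to this processing.")
    return model.predict([record["features"]])  # assumes an sklearn-style model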
Can the right to erasure and the right of access be easily exercised by the individual?
In particular, if the AI model carries a risk of re-identification or membership inference, such that its parameters qualify as personal data, how can individuals exercise these rights?
If the AI model’s parameters contain certain key data points from the training set, can these rights be exercised by the individual on the parameters? What measures will be taken in that case?
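One practical way to gauge whether model parameters memorise identifiable training data is a simple membership-inference test: compare the model's confidence on records known to be in the training set with its confidence on held-out records. A minimal sketch, assuming an sklearn-style classifier with integer class labels; the metric is illustrative, not a compliance test:

```python
import numpy as np

def membership_inference_gap(model, train_X, train_y, holdout_X, holdout_y):
    """Mean-confidence gap between training and held-out records.

    A large gap suggests the model memorises training data, which
    strengthens the case for treating its parameters as personal data.
    Assumes `model.predict_proba` (sklearn-style) and integer labels
    aligned with the probability columns.
    """
    train_conf = model.predict_proba(train_X)[np.arange(len(train_y)), train_y]
    holdout_conf = model.predict_proba(holdout_X)[np.arange(len(holdout_y)), holdout_y]
    return float(train_conf.mean() - holdout_conf.mean())
```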
Can the right to restriction of processing and the right to rectification be easily exercised by the individual?
In particular, will it be possible to verify the integrity of the individual's data through logging?
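Logging can support such verification if entries are made tamper-evident, for example by chaining entry hashes so any later modification of a subject's data trail is detectable. A minimal sketch, not tied to any particular logging framework; all names are hypothetical:

```python
import hashlib
import json
import time

def append_log_entry(log, subject_id, event):
    """Append a tamper-evident entry: each hash covers the previous one."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = {"subject_id": subject_id, "event": event,
               "timestamp": time.time(), "prev_hash": prev_hash}
    payload["hash"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()).hexdigest()
    log.append(payload)
    return payload

def verify_log(log):
    """Recompute the hash chain to check the integrity of the whole trail."""
    prev_hash = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()
                          ).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True
```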
If the AI system is used to profile individuals, is the profile predicted by the system merely indicative during processing, or is it an outcome in itself? If it is an outcome in itself, can the right to rectification be easily exercised?
Supervising automated decisions
Where a decision is based exclusively on automated processing and may have consequences for individuals, Article 22 of the GDPR and Article 47 of the French Data Protection Act apply. These articles establish that, subject to certain exceptions, every individual has the right not to be subject to a decision based solely on automated processing, including profiling, where that decision produces legal effects concerning them or similarly significantly affects them.
This means that where none of the provided exceptions applies, human supervision confirming or replacing the automated decision is required.
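In practice, this is often implemented by routing any decision with legal or similarly significant effects to a human reviewer who can confirm or replace the automated outcome. A minimal sketch with hypothetical names (`Decision`, `finalise`, `human_review`):

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    outcome: str              # e.g. "approve" / "reject"
    significant_effect: bool  # legal or similarly significant effect?

def finalise(decision: Decision, human_review):
    """Auto-finalise only decisions without significant effects;
    otherwise a human confirms or replaces the automated outcome."""
    if not decision.significant_effect:
        return decision.outcome
    return human_review(decision)  # callable supplied by the organisation
```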
Does the system lead to a decision based exclusively on automated processing, by design or in practice, and producing legal effects concerning or similarly significantly affecting an individual?
Although some systems do not appear to affect individuals, could there be consequences in practice (e.g. a CV-sorting algorithm for recruitment that places some CVs at the end of a queue too long for a human to check the relevance of the algorithm's decision)?
What is the legal basis for making automated decisions?
Can individuals easily object to a decision based exclusively on automated processing?
If so, how?
Do individuals who choose not to use the AI system under Article 22 have the same benefits and opportunities as those using the system?