By Tiago Sérgio Cabral (Managing Editor)
Data Governance and the AI Regulation: Interplay between the GDPR and the proposal for an AI Act
It is hardly surprising that the European Commission’s recent proposal for a Regulation on a European Approach for Artificial Intelligence (hereinafter the “proposal for an AI Act”) is heavily inspired by the GDPR. That influence ranges from taking note of the GDPR’s success in establishing worldwide standards to learning from its shortcomings, for example by not replicating the one-stop-shop mechanism (arguably responsible for some of the GDPR’s enforcement woes).[1]
The proposal for an AI Act should not, however, be considered a GDPR for AI, for one simple reason: there is already a GDPR for AI, and it is called the GDPR. The scope and aims of the proposal are different, but the GDPR’s influence is clear, and the interplay between the two Regulations, if the AI Act is approved, will certainly be interesting. In this editorial we address one aspect where that interplay could be particularly relevant: data governance and data set management.
Before going into this subject specifically, it is important to note that the AI Act’s proposed fines have a higher ceiling than the GDPR’s: up to 30,000,000 euros or, if the offender is a company, up to 6% of its total worldwide annual turnover for the preceding financial year (Article 71(3) of the proposal for an AI Act). We should note, nonetheless, that this ceiling applies only to a restricted number of infringements, namely:
a) non-compliance with the rules on prohibited AI systems (Article 5);
b) non-compliance with the rules on data governance practices and data set management (Article 10, applicable only to high-risk AI).
Providing incorrect, incomplete or misleading information to notified bodies and national competent authorities in response to a request may result in fines of up to 10,000,000 euros or 2% of the total worldwide annual turnover for the preceding financial year.
All other infringements of the proposal for an AI Act are to be sanctioned with fines of up to 20,000,000 euros or 4% of the total worldwide annual turnover for the preceding financial year.
In theory, this could mean that the AI Act’s fines will, in the future, hit harder than the GDPR’s, for two reasons: i) the higher ceiling for the most serious infringements, as noted above; and ii) while the GDPR divides its fines between a “20 million / 4%” category and a “10 million / 2%” category, the AI Act makes the first category the rule, with only one type of infringement sanctioned under the lower bracket.
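For a concrete sense of what these ceilings mean in practice, the short sketch below computes the maximum theoretical fine under each bracket for a hypothetical company. The turnover figure is invented, and the sketch assumes that, for companies, the higher of the fixed amount and the percentage-based amount applies, following the GDPR’s approach to undertakings.

```python
# Illustrative sketch only: compares the fine ceilings discussed above for a
# hypothetical company. The turnover figure is invented, and the "whichever is
# higher" rule for companies is an assumption borrowed from the GDPR's approach.

def fine_ceiling(fixed_cap_eur: float, turnover_share: float, annual_turnover_eur: float) -> float:
    """Maximum possible fine: the fixed cap or the turnover-based cap, whichever is higher."""
    return max(fixed_cap_eur, annual_turnover_eur * turnover_share)

turnover = 2_000_000_000  # hypothetical worldwide annual turnover: EUR 2 billion

# Most serious AI Act infringements (Articles 5 and 10): EUR 30M / 6%
print(f"AI Act, Articles 5/10: up to EUR {fine_ceiling(30_000_000, 0.06, turnover):,.0f}")
# Default AI Act bracket and highest GDPR bracket: EUR 20M / 4%
print(f"AI Act default / GDPR top bracket: up to EUR {fine_ceiling(20_000_000, 0.04, turnover):,.0f}")
# Information-to-authorities bracket: EUR 10M / 2%
print(f"Lower bracket: up to EUR {fine_ceiling(10_000_000, 0.02, turnover):,.0f}")
```

On those hypothetical numbers, the turnover-based cap exceeds the fixed amount in every bracket, which is precisely why the jump from 4% to 6% matters for large providers.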
Nonetheless, it is clear that data set management and data governance are considered key in the proposal for an AI Act, which is why infringing these provisions may result in some of the highest fines in the proposed Regulation. And complying with the rules under Article 10 of the proposal for an AI Act may not be an easy task. To illustrate, paragraphs 1 to 4 are especially relevant (a short practical sketch of the kind of checks they imply follows the quoted text):
1. High-risk AI systems which make use of techniques involving the training of models with data shall be developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5.
2. Training, validation and testing data sets shall be subject to appropriate data governance and management practices. Those practices shall concern in particular,
(a) the relevant design choices;
(b) data collection;
(c) relevant data preparation processing operations, such as annotation, labelling, cleaning, enrichment and aggregation;
(d) the formulation of relevant assumptions, notably with respect to the information that the data are supposed to measure and represent;
(e) a prior assessment of the availability, quantity and suitability of the data sets that are needed;
(f) examination in view of possible biases;
(g) the identification of any possible data gaps or shortcomings, and how those gaps and shortcomings can be addressed.
3. Training, validation and testing data sets shall be relevant, representative, free of errors and complete. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These characteristics of the data sets may be met at the level of individual data sets or a combination thereof.
4. Training, validation and testing data sets shall take into account, to the extent required by the intended purpose, the characteristics or elements that are particular to the specific geographical, behavioural or functional setting within which the high-risk AI system is intended to be used.
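Although the quoted provisions are technology-neutral legal requirements, for providers they translate into concrete engineering and documentation work on the data sets themselves. As a purely illustrative sketch (the column names, sample data and the 5% representation threshold below are invented for the example, not taken from the proposal), automated checks of the kind suggested by paragraphs 2(f), 2(g) and 3 might look roughly like this:

```python
# Illustrative sketch only: a simplified example of the kind of automated checks a
# provider might run on a training data set to document the data governance
# practices listed above. All column names, sample data and thresholds are invented.
import pandas as pd

# Hypothetical training data for a high-risk system (e.g. a hiring tool).
train = pd.DataFrame({
    "age_group": ["18-30", "31-50", "31-50", "51+", "18-30", "31-50"],
    "gender":    ["F", "M", "M", "M", "F", "M"],
    "label":     [1, 0, 1, 0, 1, 0],
})

report = {}

# Article 10(2)(g): identify possible data gaps or shortcomings.
report["missing_values"] = train.isna().sum().to_dict()
report["duplicate_rows"] = int(train.duplicated().sum())

# Article 10(2)(f) and 10(3): examine how well the groups on which the system is
# intended to be used are represented (here, a crude share-per-group check).
for column in ("age_group", "gender"):
    shares = train[column].value_counts(normalize=True)
    report[f"{column}_under_5pct"] = shares[shares < 0.05].index.tolist()

# Article 10(2)(d): record the assumptions behind the data set.
report["assumptions"] = "Labels encode past hiring decisions; assumed to proxy suitability."

for key, value in report.items():
    print(f"{key}: {value}")
```

Checks like these can only evidence part of Article 10: the qualitative elements, such as design choices, assumptions and the prior assessment of data suitability, still need to be documented separately.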
As we know, not all high-risk AI will be trained using personal data, and where it is not, the GDPR will not apply. However, for AI that falls within the scope of both the GDPR and the AI Act, it is not hard to foresee that infringing the abovementioned provisions of the AI Act may also entail infringements of certain provisions of the GDPR, such as Article 5(1)(a), (c) and (d), Article 25, Article 32 and Article 35 (if a DPIA were carried out, most of the abovementioned issues would also have to be addressed under data protection law). Furthermore, depending on the specific circumstances and the AI system in question, the rules on legal basis, the provision of information and the rights of the data subject (including the right not to be subject to automated individual decision-making) could also be infringed.
Therefore, entities could, in theory, be fined up to 30,000,000 euros or 6% of worldwide annual turnover for infringing Article 10 of the AI Act and then a further 20,000,000 euros or 4% for breaching some of the abovementioned GDPR provisions.
We would argue that the likelihood of “double fines” grows if data protection authorities are given corrective powers under the AI Act (an option that a number of Member States may choose and that the Commission appears to favour, given that it assigns the corresponding powers at EU level to the European Data Protection Supervisor). Of course, giving those powers to data protection supervisory authorities could also result in a repeat of the abovementioned enforcement issues, especially if Member States do not give them the means to perform their new tasks and the EU does not act, through infringement procedures if needed, to ensure that Member States do so.
These are still early days, and the proposal for an AI Act may change substantially during negotiations. Nonetheless, early signs point to a very interesting interplay between the various parts of the EU’s new framework on AI, into which the GDPR and the AI Act will be integrated. Every actor in the AI value chain should take care in addressing the applicable legal requirements, because a mistake may result in multiple infringements and sanctions under the various legal instruments (i.e. an infringement of one may result in an almost automatic infringement of another).
[1] The proposal does not contain a tool designed for centralized enforcement by the Commission, which could be seen as a lost opportunity and may hinder the AI Act’s efficacy in large cross-border investigations.
Picture credits: Robot by erik_stein.