by Piedade Costa de Oliveira (Former official of the European Commission - Legal Service) Disclaimer: The opinions expressed are purely personal and are the exclusive responsibility of the author. They do not reflect any position of the European Commission.
The use of algorithms for automated decision-making, commonly referred to as Artificial Intelligence (AI), is becoming a reality in many fields of activity both in the private and public sectors.
It is common ground that AI raises considerable challenges not only for the area in which it is deployed but also for society as a whole. As pointed out by the European Commission in its White Paper on AI[i], AI entails a number of potential risks, such as opaque decision-making, gender-based bias or other kinds of discrimination, or intrusion on privacy.
In order to mitigate such risks, Article 22 of the GDPR confers on data subjects the right not to be subject to a decision based solely on automated processing which produces legal effects concerning them or similarly significantly affects them[ii].
It is also worth noting that the European Court of Justice (ECJ) has established requirements for the use of algorithms in the context of automated processing of air Passenger Name Record (PNR) data. Indeed, in its Opinion 1/15[iii], delivered on 26 July 2017, the ECJ noted that automated analyses of PNR data necessarily involve a certain margin of error. It then considered that the pre-established models and criteria should be specific, reliable and non-discriminatory. It added that any positive result obtained following the automated processing of PNR data must be subject to an individual re-examination by non-automated means before an individual measure adversely affecting the air passengers concerned is adopted.
In our view, even though the Court’s assessment concerns the automated processing of PNR data as envisaged in the draft EU-Canada PNR agreement, the above requirements may extend to the use of algorithms in other contexts. In this regard, we refer to recital 71 of the GDPR, which states:
“[…] In any case, such [automated] processing should be subject to suitable safeguards, which should include specific information to the data subject and the right to obtain human intervention, to express his or her point of view, to obtain an explanation of the decision reached after such assessment and to challenge the decision. […]”
Against this background, we have been following with interest the intense debate and controversy in the United Kingdom (UK) concerning the award of Advanced Level (A-level) grades to students in the last school year.
As a consequence of the COVID-19 pandemic, final-year students were unable to sit their exams. In normal circumstances, those exams are decisive in the secondary school system for students seeking the next step in their education, such as admission to university. This year, A-level students were graded by Ofqual, the national exams regulator, and the grades awarded appear to have been generated by an algorithm. Ofqual was in charge of the standardisation model, which generated a prediction of the grade in each subject by combining teachers’ assessments with each student’s ranking relative to the other students at the same school placed within the same estimated grade. According to public sources, the most heavily weighted element was the school’s performance in each subject over the previous three years.
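To make the mechanism more concrete, the sketch below shows how a rank-based standardisation of this kind can downgrade teacher-assessed results. It is a deliberately simplified illustration under our own assumptions, not Ofqual’s published model: the grade shares, student names and the mapping rule are invented for the example.

```python
# Illustrative sketch only: a deliberately simplified rank-based standardisation.
# This is NOT Ofqual's published model; all names and figures are assumptions.

def standardise(historical_distribution, ranked_students):
    """Map this year's teacher-ranked students onto the school's historical
    grade distribution (shares of each grade over the previous three years)."""
    n = len(ranked_students)
    cumulative = 0.0
    boundaries = []
    for grade, share in historical_distribution:      # e.g. ("A*", 0.10)
        cumulative += share
        boundaries.append((grade, cumulative))
    results = {}
    for i, student in enumerate(ranked_students):     # best-ranked first
        percentile = (i + 1) / n
        for grade, cutoff in boundaries:
            if percentile <= cutoff + 1e-9:
                results[student] = grade
                break
    return results

# Hypothetical school whose past cohorts earned 10% A*, 20% A, 30% B, 40% C.
history = [("A*", 0.10), ("A", 0.20), ("B", 0.30), ("C", 0.40)]
ranking = [f"student_{k}" for k in range(1, 11)]      # teacher-supplied rank order
print(standardise(history, ranking))
# Students ranked below the school's historical share of top grades are capped at
# lower grades, whatever their teachers predicted -- the mechanism behind the downgrades.
```

In a scheme of this kind, the number of top grades available to a cohort is effectively capped by the school’s past results, which would be consistent with the disparities reported below.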
The outcomes were highly controversial and generated a great deal of stress for students and their families. It appeared that around 40% of school-assessed grades were adjusted downward across England, with similar trends in Wales, Northern Ireland and Scotland. It also appeared that students from more disadvantaged backgrounds scored worse and that students from private schools fared far better, being awarded twice as many top grades as students in public sector schools[iv].
On 14 August, the Information Commissioner’s Office (ICO) published the following statement:
“We understand how important A-level results and other qualifications are to students across the country. When so much is at stake, it’s especially important that their personal data is used fairly and transparently. We have been engaging with Ofqual to understand how it has responded to the exceptional circumstances posed by the COVID-19 pandemic, and we will continue to discuss any concerns that may arise following the publication of results.
“The GDPR places strict restrictions on organisations making solely automated decisions that have a legal or similarly significant effect on individuals. The law also requires the processing to be fair, even where decisions are not automated.”
At first, Gavin Williamson, the Education Secretary, supported the system. Indeed, in an interview with the BBC reported on 12 August, he said the exams system was fair and robust[v]. However, after angry protests by pupils and an outcry from teachers, MPs, academics and parents, the Education Secretary and Ofqual apologised to students and their parents and announced that all A-level and General Certificate of Secondary Education (GCSE) results would be based on teacher-assessed grades[vi].
There are lessons to be learned from this example, especially by public sector bodies. First, where algorithms are used to evaluate personal aspects relating to an individual, this qualifies as automated processing of personal data and the GDPR applies. Secondly, as provided for in Article 22 of the GDPR, decisions based on automated processing which produce legal effects concerning an individual or similarly significantly affect him or her should be subject to human intervention: to quote the GDPR, they should not be based solely on automated processing, unless one of the exceptions provided for in Article 22(2) applies. Thirdly, public sector bodies must take a very careful approach to automated decision-making before it is deployed, especially where the decisions at stake are so important as to affect an individual’s future. Not only may the legality of the automated processing be called into question; more generally, the public acceptance of the use of AI is at stake.
[i] COM(2020) 65 final; See also the Ethics Guidelines for Trustworthy AI presented in April 2019 by the High-Level Expert Group on AI (AI HLEG) created by the European Commission.
[ii] This right does not apply in certain situations, cf. Article 22(2); See also recital 71.
[iii] Opinion 1/15 on the draft agreement between Canada and the European Union concerning the transfer of Passenger Name Record (PNR) data from the European Union to Canada, at paras. 171 to 174.
[iv] IAPP, “Europe Data Protection Digest”, received by the author on 21 August 2020.
[v] BBC, https://www.bbc.com/news/education-53833723?intlink_from_url=https://www.bbc.com/news/education&link_location=live-reporting-story
[vi] The Guardian, https://www.theguardian.com/education/2020/aug/17/a-levels-gcse-results-england-based-teacher-assessments-government-u-turn
Picture credits: StockSnap.