A short introduction to accountability in machine-learning algorithms under the GDPR


by Andreia Oliveira, Master in EU Law (UMINHO)
and Fernando Silva, Consulting coordinator - Portuguese Data Protection National Commission

Artificial Intelligence (AI) can be defined as computer systems designed to perform a wide range of tasks that are “normally considered to require knowledge, perception, reasoning, learning, understanding and similar cognitive abilities” [1]. Intelligent machines capable of imitating human actions and performances are perhaps the most common illustration of AI. One needs to recognise, however, that AI is a convoluted field: machine learning, big data and related notions such as automation must all hold a seat when discussing it. Machine learning, for example, is defined as the ability of computer systems to improve their performance without explicitly programmed instructions: a system is able to learn on its own, without human intervention [2]. To do this, it derives new rules, different from the ones that were previously programmed, and incorporates as new inputs what it has acquired during previous interactions.
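
Since the contrast between explicitly programmed instructions and learned behaviour is the crux of this definition, a minimal Python sketch may make it concrete. It assumes scikit-learn is available; the loan-style feature names, values and labels are invented purely for illustration.

```python
# Minimal contrast between explicit programming and machine learning.
# All feature names, values and labels below are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Explicitly programmed rule: the decision logic is written by a human.
def approve_explicit(income_k: float, debt_k: float) -> bool:
    return income_k > 35 and debt_k < 20

# Learned rule: the system derives its own decision logic from examples.
X_train = [[20, 15], [45, 5], [30, 25], [60, 10], [25, 2], [50, 30]]  # [income, debt]
y_train = [0, 1, 0, 1, 1, 0]  # 0 = reject, 1 = approve (hypothetical labels)

model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)

# The programmer never wrote the thresholds the tree now applies:
print(approve_explicit(40, 8))       # True, by the hand-written rule
print(model.predict([[40, 8]])[0])   # the rule the machine inferred itself
```

The point of the sketch is only that the second rule was never spelled out by a programmer: it emerged from the examples the system was given, which is precisely what complicates the allocation of responsibility discussed below.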

The capabilities of machine learning may put privacy and data protection in jeopardy. Ascertaining liability therefore becomes inevitable and implies considering, inter alia, all plausible actors that may be called to account.

Under the General Data Protection Regulation (GDPR), the principle of accountability is intrinsically linked to the principle of transparency. Transparency empowers data subjects to hold data controllers and processors accountable and to exercise control over their personal data. Accountability requires transparency of processing operations; transparency, however, does not in itself constitute accountability [3]. Rather, transparency acts as accountability’s helper, for instance by removing barriers such as opacity.

In 1980, the OECD adopted the Guidelines Governing the Protection of Privacy and Transborder Flows of Personal Data (hereafter Guidelines) to address concerns arising from the increased use of personal data and avoid the risk of restrictions to the flow of information across borders. These Guidelines were the first internationally agreed-upon set of privacy principles.

Under the OECD Guidelines, accountability was introduced as a distinct principle. Paragraph 14 provides that: “A data controller should be accountable for complying with measures which give effect to the principles stated above”.

This principle is strengthened by the Explanatory Memorandum, which states at Paragraph 62 that “the data controller decides about data and data processing activities. It is for his benefit that the processing of data is carried out. Accordingly, it is essential that under domestic law accountability for complying with privacy protection rules and decisions should be placed on the data controller who should not be relieved of this obligation merely because the processing of data is carried out on his behalf by another party, such as a service bureau. On the other hand, nothing in the Guidelines prevents service bureaux personnel, “dependent users” […] and others from also being held accountable. For instance, sanctions against breaches of confidentiality obligations may be directed against all parties entrusted with the handling of personal information […]. Accountability under Paragraph 14 refers to accountability supported by legal sanctions, as well as to accountability established by codes of conduct, for instance” (emphasis added).

Against this background, accountability implies that an actor may be called upon to answer for something. The role of accountability seems to be the allocation of responsibility [4].

In order to determine liability in AI, one would need to consider numerous plausible actors who could be called to account: software builders, hardware builders, CEOs of companies that sell products embedding AI, researchers, data mining scientists, DPOs and, without exhausting every possibility, users. The following example may help: a self-driving car perceives an imminent accident that cannot be avoided and, given the circumstances of a heavily populated area, must choose between only two options: harming pedestrians or harming the people inside the car. In such a case, one could think of general liability. However, if we consider that the self-driving car made its own decision according to, for instance, the status of the people around it, other questions arise: did the car access citizens’ rating data without proper authorisation? Was the decision based on a person’s profile? Assessing accountability, besides being compound, raises several further points. For instance, in the event of an accident where the profiles of the actors involved were assessed by processing personal data, who would be held accountable? The company that designed the AI-equipped car to look up the ratings of the data subjects involved? The car owner, who has to bear the risk liability? The organisation that provided the data used to build the profiles on which the car’s AI based its decision? The premises that the car company inserted into the car’s AI? We believe one would be able to plead in favour of any of these answers.

Notwithstanding, we are still able to point out some important considerations on accountability. First, accountability implies a set of norms against which the conduct of an individual or entity is evaluated. Second, accountability involves a relationship between actors in which one of them has the obligation to explain or justify its conduct or actions. Finally, accountability presumes the existence of possible sanctions [5].

Under the GDPR, several other considerations should be taken into account as far as AI is concerned:

  • The GDPR does not clarify “towards whom algorithmic decision-makers should be considered to be accountable: they only point out to whom they should be accountable when algorithmic decision-makers process personal data” [3].
  • The GDPR’s principle of accountability will only apply to algorithms when their inputs and outputs are personal data according to the definition of the Regulation [3]. Big data environments lodge vast amounts of data, but only some of it concerns an identifiable data subject, which means that the principle will only apply in certain cases. Ascertaining which data is being processed is, therefore, the key point, as the sketch below illustrates.
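
A minimal sketch of that ascertainment step might look as follows: separating the fields of a record that relate to an identifiable person from those that do not, before any accountability analysis is applied. The record layout and the field classification are our own assumptions for illustration, not a legal test of identifiability.

```python
# Hypothetical sketch: flag which inputs of an algorithm are personal data
# in the sense of Article 4(1) GDPR. The field list is an invented example,
# not a legal test for identifiability.
PERSONAL_FIELDS = {"name", "email", "location_history"}

def split_record(record: dict) -> tuple[dict, dict]:
    """Separate fields relating to an identifiable person from the rest."""
    personal = {k: v for k, v in record.items() if k in PERSONAL_FIELDS}
    other = {k: v for k, v in record.items() if k not in PERSONAL_FIELDS}
    return personal, other

record = {"name": "Ana", "location_history": ["Braga", "Porto"], "road_condition": "wet"}
personal, other = split_record(record)
# Only the processing of `personal` triggers the GDPR's accountability principle.
print(personal)  # {'name': 'Ana', 'location_history': ['Braga', 'Porto']}
```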

Taking a closer look at Article 22, one may notice that the GDPR seems to give the data subject the power to decide whether he or she wants to be subject to an automated decision: “the data subject shall have the right not to be subject to a decision based solely on automated processing […]”. Where that is not possible, because of the exceptions listed in Article 22(2)(a) and (c), “the data controller shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision” (emphasis added).
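
How those Article 22(3) safeguards might translate into a processing pipeline can be sketched as follows; the pipeline, its function names and the stubbed model output are all our own illustrative assumptions, not an implementation prescribed by the Regulation.

```python
# Illustrative sketch of the Art. 22(3) safeguards: a solely automated
# decision must remain contestable and open to human intervention.
# All names and the stubbed logic are invented for this example.
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str
    solely_automated: bool

def automated_decision(application: dict) -> Decision:
    # Stand-in for the automated model's output.
    return Decision(outcome="reject", solely_automated=True)

def human_review(application: dict, prior: Decision) -> Decision:
    # A human re-examines the case and may overturn it; stubbed here.
    return Decision(outcome=prior.outcome, solely_automated=False)

def handle(application: dict, subject_contests: bool) -> Decision:
    decision = automated_decision(application)
    if decision.solely_automated and subject_contests:
        # Right to obtain human intervention, to express a point of view
        # and to contest the decision (Art. 22(3) GDPR).
        return human_review(application, decision)
    return decision

print(handle({"applicant": "Ana"}, subject_contests=True))
```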

With this in mind, it would be reasonable to ask whether autonomous machines operating with no human intervention could be regarded as controllers, since they would determine the purposes of the processing. If answered in the affirmative, one wonders how an average person could express his or her point of view and, more importantly, contest a decision. To add even thornier points to the discussion: how can one be sure that a machine is processing personal data, and how is consent to be given or withdrawn?

Under the GDPR, a “controller means the natural or legal person, public authority, agency or other body which, alone or jointly with others, determines the purposes and means of the processing of personal data […]”. Could we consider that an intelligent machine falls within the scope of this definition? There would be arguments on both sides; however, we might find ourselves compelled to consider it possible.

The European Parliament’s Committee on Legal Affairs’ report can provide some preliminary insight, as it questions whether robots “should be regarded as natural persons, legal persons, animals or objects – or whether a new category should be created” [5]. If a machine’s personhood were recognised, and an artificially intelligent agent were therefore considered a controller, it would not only have rights and obligations but could also be held accountable. But then, would that not run against the ratio legis of allocating responsibility?

In terms of accountability, the GDPR only shows actors how they should act in order to be compliant. Algorithms designed with learning capabilities are challenging the traditional concept of responsibility and existing regulatory frameworks. Algorithms that can programme themselves lead to the point where “nobody has enough control over the machine’s actions to be able to assume responsibility for them” [6].

Under the GDPR, a Data Protection Impact Assessment (DPIA) is mandatory when processing is “likely to result in a high risk to the rights and freedoms of natural persons”. Carrying out a DPIA could be “a first step towards having a more accountable algorithmic culture”, as “the controller himself would first have to assess the desirability of algorithmic decision-making” [3]. However, this can be a challenge when dealing with machine learning capabilities: there is no full control over the algorithms created, as it is sometimes unknown for which purposes they have been created.
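
As an illustration only, that first screening step could be encoded as a simple gate, loosely modelled on the examples of high-risk processing in Article 35(3) GDPR; the criteria and their boolean encoding are our assumptions, not an official checklist.

```python
# Loose sketch of an Art. 35(3) GDPR screening gate. The three criteria
# roughly mirror points (a)-(c) of that provision; the encoding is ours.
def dpia_required(systematic_profiling_with_legal_effects: bool,
                  large_scale_special_category_data: bool,
                  large_scale_public_area_monitoring: bool) -> bool:
    return (systematic_profiling_with_legal_effects
            or large_scale_special_category_data
            or large_scale_public_area_monitoring)

# An algorithmic decision-making system that profiles individuals:
if dpia_required(True, False, False):
    print("Carry out a DPIA before processing (Art. 35 GDPR).")
```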

[1] Saloky, T., Šeminský, J.: Artificial Intelligence and Machine Learning (2005).

[2] Norwegian Data Protection Authority (Datatilsynet): Artificial Intelligence and Privacy, report, January 2018.

[3] Vedder, A., Naudts, L.: Accountability for the Use of Algorithms in a Big Data Environment. International Review of Law, Computers & Technology 31 (2017).

[4] Alhadeff, J., Van Alsenoy, B., Dumortier, J.: The accountability principle in data protection regulation: origin, development and future directions. In: Guagnin, D., Hempel, L., Ilten, C. (eds.) Managing Privacy through Accountability. Palgrave Macmillan (2012).

[5] European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)).

[6] Matthias, A.: The responsibility gap: ascribing responsibility for the actions of learning automata. Ethics and Information Technology 6 (2004).

Picture credits: Artificial intelligence… by Mike MacKenzie (vpnsrus).
