A short introduction to accountability in machine-learning algorithms under the GDPR


 by Andreia Oliveira, Master in EU Law (UMINHO)
 and Fernando Silva, Consulting coordinator - Portuguese Data Protection National Commission

Artificial Intelligence (AI) can be defined as computer systems designed to perform a wide range of tasks that are “normally considered to require knowledge, perception, reasoning, learning, understanding and similar cognitive abilities” [1]. Intelligent machines capable of imitating human actions, performances and activities are perhaps the most common illustration of AI. One must, however, recognise that AI is a convoluted field – machine learning, big data and related concepts such as automation must all hold a seat when discussing AI. Machine learning, for example, is defined as the ability of computer systems to improve their performance without explicitly programmed instructions: a system will be able to learn independently, without human intervention [2]. To do this, a machine-learning system refines its own decision rules, departing from the instructions it was initially given, by incorporating what it has acquired during previous interactions as new inputs.
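The idea that a system "improves its performance without explicitly programmed instructions" can be made concrete with a minimal sketch. The example below is purely illustrative (a toy perceptron written for this post, not any real library or system): the program is never told the rule it ends up applying; it derives its decision rule from examples, adjusting internal weights each time it makes a mistake.

```python
# Minimal sketch of machine learning: a perceptron that learns the
# logical AND function from labelled examples rather than from
# hand-coded rules. All names here are illustrative.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights from (inputs, label) pairs by error correction."""
    w = [0.0, 0.0]   # weights, adjusted from data, not programmed by hand
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), label in samples:
            predicted = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = label - predicted
            # This update rule is the "learning": each mistake made on
            # past interactions nudges the weights for future ones.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The target behaviour (logical AND) is nowhere written as a rule;
# it emerges from the training examples alone.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])
```

The relevance for accountability is visible even in this toy: the final behaviour is encoded in numeric weights produced by the training process, not in instructions a programmer wrote, which is precisely what makes such systems harder to explain and audit.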

The capabilities of machine learning may put privacy and data protection in jeopardy. Ascertaining liability therefore becomes inevitable, and implies considering, inter alia, all plausible actors that can be called to account.

Under the General Data Protection Regulation (GDPR), the principle of accountability is intrinsically linked to the principle of transparency. Transparency empowers data subjects to hold data controllers and processors accountable and to exercise control over their personal data. Accountability requires transparency of processing operations; transparency, however, does not in itself constitute accountability [3]. Rather, transparency acts as an aid to accountability – for instance, by helping to overcome barriers such as opacity.

Algorithm-driven collusion


 by Virgílio Pereira, collaborating member of CEDU

It has been said that digital markets are new and different.[i] Indeed, competition enforcement reforms have already begun their journey, tackling the unorthodox dynamics of digital markets. Examples include the reforms taking place in Germany.[ii] They have entailed, among others, the possibility of setting up a digital agency responsible for the supervision of digital markets, whose tasks would include dispute resolution in competition issues.[iii] Becoming vigilant and gathering know-how is certainly a valuable starting point.

Recently, the Council adopted the Commission’s proposal intended to empower Member States’ competition authorities to be more effective enforcers.[iv] It includes reinforcing competition authorities’ investigative powers, including their power to collect digital evidence. Discussion of the unorthodoxy of digital markets and the challenges arising from them should take place within the context of the implementation of the Directive or, more generally, within the European Competition Network.