Editorial of April 2020


by Alessandra Silveira, Editor


Health-related personal data – regarding COVID-19 and digital surveillance

Article 9 of Regulation (EU) 2016/679 – the General Data Protection Regulation (hereinafter, “GDPR”) – prohibits the processing of special categories of personal data, amongst them (and the ones relevant for the subject of this essay): genetic data; biometric data for the purpose of uniquely identifying a natural person; and data concerning health. However, this prohibition shall not apply if processing is necessary for the purposes of medical diagnosis; the provision of health care or treatment; the management of health care systems; or pursuant to a contract with a health professional, in accordance with point h) of Article 9/2 of the GDPR and under the further conditions established in Article 9/3. In particular, the general prohibition shall not apply if the “processing is necessary for reasons of public interest in the area of public health, such as protecting against serious cross-border threats to health or ensuring high standards of quality and safety of health care and of medicinal products or medical devices”, under point i) of Article 9/2.

Editorial of December 2019


by João Marques, member of the Portuguese Data Protection National Commission


Portuguese DPA won’t apply the country’s GDPR law

In spite of its nature[i], the GDPR leaves some room for manoeuvre to the Member States. This European legal instrument has even been called a hybrid[ii] between a directive and a regulation, precisely because there is a significant number of issues on which national legislation can in fact diverge from the general solutions the GDPR brings to the table. Although such leeway is not to be mistaken for a “carte blanche” to the Member States, there is nevertheless a relevant part to be played by national legislators.

From the definition of a minimum legal age for children’s consent to be considered valid for their personal data to be processed (in relation to information society services), which can vary between 13 and 16 years of age, to the waiver on fines being applied to the public sector (Article 83, 7), there is a vast array of subjects left for the Member States to determine. In fact, a whole chapter of the GDPR[iii] is dedicated to these subjects, namely: Processing and freedom of expression and information (Article 85); Processing and public access to official documents (Article 86); Processing of the national identification number (Article 87); Processing in the context of employment (Article 88); Safeguards and derogations relating to processing for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes (Article 89); Obligations of secrecy (Article 90); and Existing data protection rules of churches and religious associations (Article 91).

Additionally, matters of procedural law, according to the principle of conferral (Article 5 of the Treaty on European Union), are almost entirely left for Member States to regulate, with few exceptions such as the deadlines and the (in)formalities of the reply to a data subject rights request (Article 12) and, most notably, the one-stop shop procedure (established in Article 60) and all its related issues, which are undertaken by the European Data Protection Board, the new European Union body provided for by the GDPR (section 3 of Chapter VII).

The task that lay ahead of the Portuguese legislator, concerning the national reform of the Data Protection Law[iv], was therefore demanding, but framed in a way that should have helped steer its drafting in a comprehensive and relatively straightforward manner[v].

The legislative procedure in Portugal took some time to get started, and it was not until 22 March 2018 that a government proposal was finally approved and forwarded to the Parliament, as this is a matter within its competence under Article 165(1)(b) of the Portuguese Constitution.

Editorial of October 2019


 by Tamara Álvarez Robles, Lecturer at the University of Vigo


On the reform of national law on data protection: the special incorporation of digital rights in Spain

The reform of the Spanish Organic Law on Data Protection (LO 3/2018), adapting it to the General Data Protection Regulation, has introduced, alongside the European requirements, a catalogue of digital rights. Title X, “Guarantee of digital rights”, is undoubtedly one of the biggest novelties in data protection regulation. It comprises Articles 79 to 97, which present, for the first time in the Spanish national legislative sphere, the new generation of digital rights[i], inter alia: the right to Internet neutrality, the right to digital security, the right to digital education, the protection of minors on the Internet, the right to rectification on the Internet, the right to privacy and the use of digital devices in the workplace, the right to digital disconnection in the workplace, and the right to a digital testament.

The inclusion in extremis of the present Title X on digital rights, through an amendment of the Congress of Deputies dated April 18, 2018, responds to the fundamental importance of the ever-present and dominating reality of the Internet, which reaches all spheres of our lives. That is why Organic Law 3/2018, in section IV of its Preamble, already points to the involvement of public authorities through the provision of public policies (Article 9.2 SC) in order to make the catalogue of digital rights effective on the basis of the principle of equality (Article 14 SC), stating that: “it is the responsibility of the public authorities to promote policies that make effective the rights of citizens on the Internet, promoting the equality of citizens and the groups in which they are integrated in order to make possible the full exercise of fundamental rights in the digital reality”.

Editorial of September 2019


 by Alessandra Silveira, Editor
 and Tiago Cabral, Master's student in EU Law at UMinho


Google v. CNIL: Is a new landmark judgment for personal data protection on the horizon?

1. In the 2014 landmark judgment Google Spain (C-131/12), the Court of Justice of the European Union (hereinafter, “ECJ”) was called upon to answer the question of whether data subjects had the right to request that some (or all) search results referring to them be suppressed from a search engine’s results. In its decision, the ECJ clarified that search engines engage in data processing activities and recognised the data subject’s right to have certain results suppressed from the results (even if maintained on the original webpage).

2. This right found its legal basis in Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data (hereinafter, “Directive 95/46”), jointly with Articles 7 (respect for private and family life) and 8 (protection of personal data) of the Charter of Fundamental Rights of the European Union (hereinafter, “Charter”). In accordance with the Court’s decision, it can be exercised against search engines acting as data controllers (Google, Bing, Ask, amongst others) and does not depend on effective harm having befallen the data subject due to the inclusion of personal data in the search engine’s results. The data subject’s rights should override the economic rights of the data controller and the public’s interest in having access to the abovementioned information, unless a pressing public interest in having access to the information is present.

3. Google Spain offered some clarity on a number of extremely relevant aspects, such as: i) the [existence of] processing of personal data by search engines; ii) their status as data controllers under EU law; iii) the applicability of the EU’s data protection rules even if the undertaking is not headquartered in the Union; iv) the obligation of a search engine to suppress certain results containing personal data at the request of the data subject; v) the extension, range and (material) limits of the data subjects’ rights. The natural conclusion to arrive at is that Google Spain granted European citizens the right to no longer be linked by name to a list of results displayed following a search made on the basis of said name.

4. What the judgment did not clarify, however, is the territorial scope of the right (i.e., where in the world does the connection have to be suppressed?). Is it a global obligation? European-wide? Only within the territory of a specific Member State? In 2018, the European Data Protection Board (hereinafter, “EDPB”) issued Guidelines on the territorial scope of the GDPR, but their focus is Article 3 of that legal instrument, and they therefore offer no clarity on this issue (even if they did, they would not bind the ECJ).

A short introduction to accountability in machine-learning algorithms under the GDPR


 by Andreia Oliveira, Master in EU Law (UMINHO)
and Fernando Silva, Consulting coordinator - Portuguese Data Protection National Commission

Artificial Intelligence (AI) can be defined as computer systems designed to solve a wide range of tasks that are “normally considered to require knowledge, perception, reasoning, learning, understanding and similar cognitive abilities” [1]. Intelligent machines capable of imitating human actions, performances and activities seem to be the most common illustration of AI. One needs to recognise that AI is convoluted – thus machine learning, big data and other terms such as automation must hold a seat when discussing AI. Machine learning, for example, is defined as the ability of computer systems to improve their performance without explicitly programmed instructions: a system is able to learn independently, without human intervention [2]. To do this, machine learning develops new algorithms, different from the ones that were previously programmed, and incorporates as new inputs what it has acquired during previous interactions.
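The learning-from-data idea described above can be sketched in a few lines of Python. This is a toy illustration, not drawn from any of the cited works: a one-nearest-neighbour classifier whose "programme" is never rewritten by a human, yet whose predictions change as it accumulates labelled examples.

```python
# A minimal sketch of learning from data rather than from explicit rules:
# the prediction logic below is fixed, but its behaviour changes as the
# list of observed (value, label) examples grows.

def nearest_neighbour_predict(examples, point):
    """Predict the label of `point` from the closest observed example."""
    closest = min(examples, key=lambda ex: abs(ex[0] - point))
    return closest[1]

seen = [(1.0, "low"), (10.0, "high")]
print(nearest_neighbour_predict(seen, 2.5))   # "low": 2.5 is nearest to 1.0

seen.append((3.0, "high"))                    # a new experience is recorded
print(nearest_neighbour_predict(seen, 2.5))   # now "high": behaviour changed
```

No instruction was reprogrammed between the two calls; only the data changed, which is the distinction the paragraph draws between machine learning and conventional programming.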

The capabilities of machine learning may put privacy and data protection in jeopardy. Therefore, ascertaining liability becomes inevitable and implies considering, inter alia, all plausible actors that can be called to account.

Under the General Data Protection Regulation (GDPR), the principle of accountability is intrinsically linked to the principle of transparency. Transparency empowers data subjects to hold data controllers and processors accountable and to exercise control over their personal data. Accountability requires transparency of processing operations; however, transparency does not constitute accountability [3]. Rather, transparency acts as an aid to accountability – e.g. helping to avoid barriers such as opacity.

Trends shaping AI in business and main changes in the legal landscape


 

by Ana Landeta, Director of the R+D+i Inst. at UDIMA
and Felipe Debasa, Director of the ONSSTKT21stC at URJC

Without a doubt, in the European Union policy context, “Artificial Intelligence (AI) has become an area of strategic importance and a key driver of economic development. It can bring solutions to many societal challenges from treating diseases to minimising the environmental impact of farming. However, socio-economic, legal and ethical impacts have to be carefully addressed”[i].

Accordingly, organisations are starting to make moves that act as building blocks for imminent change and transformation. With that in mind, Traci Gusher-Thomas[ii] has identified four trends that demonstrate how machine learning is starting to bring real value to the workplace. Each of the following four areas, it is stated, provides value to an organisation seeking to move forward with machine learning, and adds incremental value that can scale up to be truly transformational.

Internet, e-evidences and international cooperation: the challenge of different paradigms


 

by Bruno Calabrich, Federal circuit prosecutor (Brazil)


There is a crisis in the world today concerning e-evidence. Law enforcement authorities urgently need to access and analyse various kinds of electronic data for efficient investigations and criminal prosecutions. They need it not only for investigating and prosecuting so-called internet crimes: virtually any crime today can be committed via the internet, and even those which are not executed using the web can often be elucidated by information stored on one node of the internet or another. The problem is that enforcement authorities cannot always, or easily, access these data[i], as the servers where they are stored are frequently located in a different country. Thus, international cooperation is frequently a barrier to overcome so that e-evidence can be obtained in a valid and useful way. And, today, the differences around the world in the legal structures available for this task may not be helping much.

The most commonly known instruments for obtaining electronic data stored abroad are the MLATs – Mutual Legal Assistance Treaties – agreements signed between two countries for cooperation in exchanging information and evidence (not restricted to internet evidence) to be used by authorities in investigations and formal accusations. Cooperation occurs from authority to authority, according to a bureaucratic procedure specified in each treaty: one country (where the data are needed) requesting and the other (where they are located) providing them. But in a fast-changing world, where crime and information move even faster, MLATs are not proving to be the fastest or most efficient way. In Brazil, for instance, the rate of success in cooperation with the United States through its MLAT barely reaches 20% of cases. Brazil, the US and other countries do not seem to be satisfied with that.

Editorial of January 2019


 by Alexandre Veronese, Professor at University of Brasília


Article 13 and the vigilance dilemma

The first US battles about filtering

In light of the ongoing worldwide debate surrounding legal regimes over the internet – in particular the recent controversies over proposed amendments to applicable EU rules, such as Directive 96/9, Directive 2001/29 or Directive 2012/28, but most notably Article 13 of the (soon-to-be) Directive on Copyright in the Digital Single Market – it is of the utmost importance to seek some perspective. The topic is as relevant as it is complex, with a range of aspects to consider. For instance, one of the approaches the EU is taking to the matter involves the use of the internet (or digital tools in general) for new cultural purposes, following the celebration in 2018 of the European Year of Cultural Heritage. In that regard, I had the opportunity to reflect upon this debate alongside Professor Alessandra Silveira, editor of the Blog of UNIO, and other colleagues in an excellent Portuguese podcast. In this post, I intend to shed some light on the global depth of the matter by analysing the inaugural American experience.

At the beginning of the widespread usage of the Internet, American society was immersed in a debate about how to deal with offensive content. In the 1990s, the Internet had no boundaries and no firewalls to prevent the incoming waves of pornographic and other objectionable material. Quickly, a political movement made a strong statement in order to protect American families from that threat. In 1996, the US Congress passed a bill named the Communications Decency Act, also known as the CDA. The bill was signed into law by President Bill Clinton. The CDA was intended to provide an effective system to take down offensive content. Some of the founders of the Internet launched a campaign against the CDA. The now widely famous Electronic Frontier Foundation was the spearhead of the resistance. To this day, we remember the Declaration of the Independence of Cyberspace, written by John Perry Barlow. The major weapon of the resistance was the First Amendment of the US Constitution. Lawsuits were filed, and within a brief timespan the US Supreme Court struck down the CDA, ruling it unconstitutional. The Supreme Court maintained the long-standing interpretation that the State must refrain from any action that could amount to censorship (Reno v. ACLU, 1997).

The US CLOUD Act and EU Law


 by Alexandre Veronese, Professor at University of Brasília

In March 2018, the President of the United States of America signed into law a bill approved by Congress, which amended two parts of the US Code, the consolidation of the country’s federal statutory norms. The Clarifying Lawful Overseas Use of Data Act – CLOUD Act – was the third version of two preceding bills. Those prior bills tried to solve a grave contemporary issue: the difficulty of accessing electronic data that may be necessary for criminal investigations and prosecutions. The new CLOUD Act mainly changes two passages of the US Code. It creates the possibility for the United States and foreign countries to sign executive agreements granting mutual assistance in order to authorise the gathering of overseas data. In addition, the CLOUD Act creates standards for those agreements.

The United States of America has a long-standing protection against unreasonable searches and seizures entrenched in the Fourth Amendment of its Constitution. The debate about the limits on accessing information captured through new means of communication is rather old in the US. The Federal Wiretap Act entered the US Code as part of the Omnibus Crime Control and Safe Streets Act of 1968. It was a major alteration of Title 18 of the US Code, the federal statutory law on crimes and criminal procedure. The federal statutory law thus received provisions regulating lawful wiretapping in criminal investigations and the use of wiretaps between agencies and jurisdictions. Notwithstanding, the passing of time and the evolution of technology showed the aging of those legal norms. Much of the information that matters to seize, in order to obtain effective evidence for use in investigations, came to be electronic. It was necessary to modify the Wiretap Act, and in 1986 came the Electronic Communications Privacy Act. The new Act modernised the law and regulated the criminal aspects of stored electronic information – the Stored Communications Act. The Patriot Act (2001 and 2006) brought to light some provisions regarding overseas information, which were made more detailed with the amendments signed into law in 2008.

Editorial of July 2018


 by Alessandra Silveira, Editor 
 and Sophie Perez Fernandes, Junior Editor


Artificial intelligence and fundamental rights: the problem of regulation aimed at avoiding algorithmic discrimination

The scandal involving Facebook and Cambridge Analytica (a private company for data analysis and strategic communication) raises, among others, the problem of regulating learning algorithms. And the problem lies above all in the fact that there is no necessary connection between intelligence and free will. Unlike human beings, algorithms do not have a will of their own, they serve the goals that are set for them. Though spectacular, artificial intelligence bears little resemblance to the mental processes of humans – as the Portuguese neuroscientist António Damásio, Professor at the University of Southern California, brilliantly explains[i]. To this extent, not all impacts of artificial intelligence are easily regulated or translated into legislation – and so traditional regulation might not work[ii].

In a study dedicated to explaining why data (including personal data) are at the basis of the machine-learning revolution – and to what extent artificial intelligence is reconfiguring science, business and politics – another Portuguese scientist, Pedro Domingos, Professor in the Department of Computer Science and Engineering at the University of Washington, explains that the defining problem of the digital age is the following: how do we find each other? This applies not only to producers and consumers – who need to establish a connection before any transaction happens – but also to anyone looking for a job or a romantic partner. Computers made the Internet possible – and the Internet created a flood of data and the problem of limitless choice. Machine learning now uses this infinity of data to help solve the limitless choice problem. Netflix may have 100,000 DVD titles in stock, but if customers cannot find the ones they like, they will end up choosing the hits; so Netflix uses a learning algorithm that identifies customer tastes and recommends DVDs. Simple as that, explains the Author[iii].
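The recommendation idea sketched in the paragraph above can be illustrated in a few lines of Python. This is a toy example with hypothetical users and titles, not Netflix's actual algorithm: it suggests the titles liked by the past user whose ratings most resemble yours.

```python
# A toy user-based recommender: find the other user most similar to you
# (counting shared titles rated the same way) and suggest the titles
# they liked that you have not yet rated. All data here are hypothetical.

def recommend(ratings, user):
    """Recommend unseen titles from the most similar other user."""
    def similarity(a, b):
        shared = set(a) & set(b)
        return sum(1 for title in shared if a[title] == b[title])

    others = [u for u in ratings if u != user]
    peer = max(others, key=lambda u: similarity(ratings[user], ratings[u]))
    return [title for title, liked in ratings[peer].items()
            if liked and title not in ratings[user]]

ratings = {  # 1 = liked, 0 = disliked
    "ana": {"Matrix": 1, "Amelie": 1, "Alien": 0},
    "rui": {"Matrix": 1, "Amelie": 1, "Blade Runner": 1},
    "eva": {"Matrix": 0, "Alien": 1, "Casablanca": 1},
}
print(recommend(ratings, "ana"))  # ['Blade Runner']
```

"ana" agrees with "rui" on two titles and with "eva" on none, so the sketch recommends the title "rui" liked that "ana" has not seen – the same taste-matching logic, in miniature, that the learning algorithms described above perform at scale.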