Editorial of January 2019


by Alexandre Veronese, Professor at the University of Brasília


Article 13 and the vigilance dilemma

The first US battles about filtering

In light of the ongoing worldwide debate surrounding legal regimes for the internet, especially the recent controversies over proposed amendments to applicable EU rules, such as Directive 96/9, Directive 2001/29 or Directive 2012/28, but most notably Article 13 of the (soon-to-be) Directive on Copyright in the Digital Single Market, it is of utmost importance to seek some perspective. The topic is as relevant as it is complex, with a range of aspects to consider. For instance, one of the EU's approaches to the matter involves the use of the internet (or digital tools in general) for new cultural purposes, following the celebration in 2018 of the European Year of Cultural Heritage. In that regard, I had the opportunity to reflect upon this debate alongside Professor Alessandra Silveira, editor of the Blog of UNIO, and other colleagues in an excellent Portuguese podcast. In this post, I intend to shed some light on the global dimension of the matter by analysing the inaugural American experience.

At the beginning of the widespread use of the Internet, American society was immersed in a debate about how to deal with offensive content. In the 1990s, the Internet had no boundaries and no firewalls to hold back the incoming waves of pornographic and otherwise unsettling material. A political movement quickly made a strong statement about protecting American families from that threat. In 1996, the US Congress passed a bill named the Communications Decency Act, also known as the CDA, and it was signed into law by President Bill Clinton. The CDA was intended to provide an effective system to take down offensive content. Some of the founders of the Internet launched a campaign against the CDA, with the now widely famous Electronic Frontier Foundation as the spearhead of the resistance. To this day we remember the Declaration of the Independence of Cyberspace, written by John Perry Barlow. The major weapon of the resistance was the First Amendment of the US Constitution. Lawsuits were filed, and within a brief timespan the US Supreme Court struck down the CDA's indecency provisions as unconstitutional. The Supreme Court maintained its long-standing interpretation that the State must refrain from any action amounting to censorship (Reno v. ACLU, 1997).

What alternative route could be taken to provide an effective instrument for keeping offensive material out of the reach of children? A massive debate ensued. The most popular solution was to create systems to filter material in a way that would foster citizens' freedom of choice. Nevertheless, that remedy had side effects. The ability to filter undesired material could also create a personal firewall that would limit the variety of available information. Lawrence Lessig, a very well-known law professor, stood against that option and stated:

“In my view, the government has no legitimate interest, consistent with the First Amendment, in facilitating or pushing technologies that facilitate general rather than narrow content discrimination; the most that the First Amendment can permit, I argue, are regulations that facilitate discrimination in a narrowly drawn sphere. This is not to argue that it would be unconstitutional if the net became a place where general discrimination were possible; it may well become that place, but that’s a different point. My claim is only that the government’s role in facilitating generalized content discrimination is quite narrow, and that we should select strategies for advancing its legitimate interests that don’t easily generalize to this broader control”[1].
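Lessig's distinction between general and narrow content discrimination is easy to see in code. The following is a minimal, purely hypothetical sketch of the kind of keyword filter debated at the time; the blocklist and the sample texts are invented for illustration only:

```python
# Hypothetical keyword-based filter of the kind debated in the late 1990s.
# The blocklist and the sample texts are invented for illustration only.

BLOCKLIST = {"sex", "breast"}

def is_blocked(text: str) -> bool:
    """Return True if any blocklisted word appears in the text."""
    words = {word.strip(".,;:!?\"'").lower() for word in text.split()}
    return not BLOCKLIST.isdisjoint(words)

# "General" discrimination blocks far more than its target:
print(is_blocked("explicit sex scenes"))                  # True  (intended)
print(is_blocked("breast cancer screening saves lives"))  # True  (over-blocking)
print(is_blocked("a history of the First Amendment"))     # False
```

The second call is precisely the problem Lessig points to: a filter general enough to catch all "offensive" speech inevitably discriminates against lawful speech as well.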

Other legislative initiatives emerged in that period, such as COPA, the Child Online Protection Act (1998), and the CPPA, the Child Pornography Prevention Act (1996). Both were challenged before the Supreme Court: COPA's enforcement was blocked in Ashcroft v. ACLU, and the CPPA was ruled unconstitutional in 2002 in Ashcroft v. Free Speech Coalition.

These three cases show that the US tradition of defending freedom of speech against state action prevented the shrinking of the Internet public sphere for many years. But that was just the first battle of the content wars. Soon the media industry recognized the risks that the new means of transmission posed to intellectually protected cultural products. Two large-scale bills were set in motion in the US Congress around 2011 and 2012: SOPA, the Stop Online Piracy Act, and PIPA, the Protect IP Act. Again, a huge campaign was waged all over the world against those changes, and the activism was fruitful: the bills never became law.

The contemporary legal standard for taking down material due to copyright infringement in the US and the proposal for a new directive on copyright in the EU Digital Single Market

The legal standards for taking down hyperlinks or material that violates intellectual property are set by Title 17 of the US Code. In 1996, the World Intellectual Property Organization (WIPO) approved the WCT – the WIPO Copyright Treaty – to provide international standards for worldwide copyright protection. The provisions of that treaty were incorporated into US law by the Digital Millennium Copyright Act of 1998. An important addition to Title 17 was the creation of Section 512, known as the Online Copyright Infringement Liability Limitation Act (OCILLA). Section 512 regulates the online takedown of material that violates intellectual property by online and internet service providers.

Section 512 grants a safe harbor to providers that did not initiate the transmission, storage, routing or caching of infringing material for third parties. US law requires providers to have designated agents to receive notices and to manage the control of material on their own networks and services. Section 512 also created a detailed process regulating the notice and counter-notice measures that must take place for content, and even hyperlinks, to be effectively taken down. US law therefore does not demand constant vigilance of networks and services; it merely creates a system of procedures that can be triggered by clear alarms (a minimal sketch of that procedural flow follows the statutory excerpt below). Paragraph (2) of subsection (c) of § 512 reads as follows:

(c) Information Residing on Systems or Networks at Direction of Users.

[…]

(2) Designated agent. The limitations on liability established in this subsection apply to a service provider only if the service provider has designated an agent to receive notifications of claimed infringement described in paragraph (3), by making available through its service, including on its website in a location accessible to the public, and by providing to the Copyright Office, substantially the following information: (A) the name, address, phone number, and electronic mail address of the agent. (B) other contact information which the Register of Copyrights may deem appropriate.
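To make the procedural (rather than surveillance-based) character of § 512 concrete, here is a minimal sketch of a notice-and-takedown workflow in Python. All class, field and method names are hypothetical illustrations, not statutory terms, and the statutory details (valid-notice requirements, waiting periods) are deliberately omitted:

```python
from dataclasses import dataclass, field

@dataclass
class HostedItem:
    url: str
    visible: bool = True  # material stays up until a notice arrives

@dataclass
class TakedownWorkflow:
    """Hypothetical sketch of the § 512(c) notice/counter-notice flow."""
    agent_email: str  # contact point of the designated agent of § 512(c)(2)
    items: dict[str, HostedItem] = field(default_factory=dict)

    def host(self, url: str) -> None:
        # The provider does not inspect uploads; it merely stores them.
        self.items[url] = HostedItem(url)

    def receive_notice(self, url: str) -> None:
        # A compliant notice triggers expeditious removal; reacting to
        # notices is what preserves the provider's safe harbor.
        if url in self.items:
            self.items[url].visible = False

    def receive_counter_notice(self, url: str) -> None:
        # If the uploader contests the notice and the rightsholder does not
        # bring suit within the statutory window, access is restored.
        if url in self.items:
            self.items[url].visible = True
```

The structural point is that nothing in the sketch scans content: every state change is triggered by a notice sent to the designated agent.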

In short, US law mandates that providers create a system to receive complaints and notices regarding intellectual property violations. To a certain extent, EU law follows the same approach: Directive 2000/31 contains provisions similar to those of the US law. The limitations on the liability of information society service providers appear in Section 4 of the Directive, in Articles 12, 13 and 14. In addition, Article 15 of the Directive makes clear that Member States may not impose a burden of monitoring the network. Clearly, US law likewise does not oblige providers to monitor their services and networks; however, Title 17 makes clear the necessity of maintaining an effective channel between providers and intellectual property rightsholders or their representatives.

The European Directive did not impose a similar obligation to maintain a complaint channel. It seems reasonable to think that the issue was left out of the Directive because Article 11 of the WCT already obliged most EU countries to provide some national system of protection for online intellectual property:

Article 11 [Obligations concerning Technological Measures] Contracting Parties shall provide adequate legal protection and effective legal remedies against the circumvention of effective technological measures that are used by authors in connection with the exercise of their rights under this Treaty or the Berne Convention and that restrict acts, in respect of their works, which are not authorized by the authors concerned or permitted by law.

The question now is: does Article 13 of the EU copyright proposal try to create a system similar to that of Title 17 of the US Code, or does it seek to impose a burden of monitoring? A general monitoring burden is prohibited by Article 15 of Directive 2000/31:

Article 15 – No general obligation to monitor.

  1. Member States shall not impose a general obligation on providers, when providing the services covered by Articles 12, 13 and 14, to monitor the information which they transmit or store, nor a general obligation actively to seek facts or circumstances indicating illegal activity.
  2. Member States may establish obligations for information society service providers promptly to inform the competent public authorities of alleged illegal activities undertaken or information provided by recipients of their service or obligations to communicate to the competent authorities, at their request, information enabling the identification of recipients of their service with whom they have storage agreements.

Despite the text of Directive 2000/31, it seems clear that the new Article 13 of the proposal obliges providers to create, together with rightsholders, some kind of monitoring system for their services:

Article 13 – Use of protected content by information society service providers storing and giving access to large amounts of works and other subject-matter uploaded by their users.

  1. Information society service providers that store and provide to the public access to large amounts of works or other subject-matter uploaded by their users shall, in cooperation with rightholders, take measures to ensure the functioning of agreements concluded with rightholders for the use of their works or other subject-matter or to prevent the availability on their services of works or other subject-matter identified by rightholders through the cooperation with the service providers. Those measures, such as the use of effective content recognition technologies, shall be appropriate and proportionate. The service providers shall provide rightholders with adequate information on the functioning and the deployment of the measures, as well as, when relevant, adequate reporting on the recognition and use of the works and other subject-matter.
  2. Member States shall ensure that the service providers referred to in paragraph 1 put in place complaints and redress mechanisms that are available to users in case of disputes over the application of the measures referred to in paragraph 1.
  3. Member States shall facilitate, where appropriate, the cooperation between the information society service providers and rightholders through stakeholder dialogues to define best practices, such as appropriate and proportionate content recognition technologies, taking into account, among others, the nature of the services, the availability of the technologies and their effectiveness in light of technological developments.

Despite the differences, Article 13(2) has the same goal as § 512: creating a takedown system to be operated by the providers. Moreover, it places on the Member States an obligation to ensure that such systems are created; no burden is imposed directly on enterprises or users. Paragraph 3 points in the same direction and actually poses no clear problem. On the contrary, it mandates something very important nowadays: the necessary multi-stakeholder debate among users, rightsholders and authorities. Of course, this too is an obligation imposed by the EU on the Member States, requiring them to create such arenas for debate.

The real problem of Article 13 is that its paragraph 1 is open to a wide array of interpretations. One could read it as imposing a duty of monitoring on providers, since it contains provisions about prevention, and to exercise such prevention some monitoring will clearly be needed. US federal law, by contrast, has never demanded any kind of preventive measure from providers. Paragraph 1 also specifies that this preventive system may rely on "content recognition technologies". Those technologies are likely to run into the same problem that brought down most Digital Rights Management (DRM) systems: it is impossible to argue the subtle, lawful application of exceptions with algorithms. Such systems can turn into preventive filtering mechanisms that impose a shrinking effect on the availability of content to Internet users. Moreover, under EU law – after the enactment of the General Data Protection Regulation – any automatic system must be subject to some degree of counterbalancing. One can certainly read the obligation of adequacy and proportionality as filling this gap; nonetheless, the proposal retains a certain vagueness. The automatic application of rules may produce wrong and unfair decisions, and when an automatic system makes a wrong decision, it is very hard to return to the status quo ante. Even reinstating the previous situation may prove futile.
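Why automated content recognition cannot weigh copyright exceptions is easy to illustrate. Below is a minimal, purely hypothetical sketch in Python: exact hashes stand in for the perceptual fingerprints real systems use (real matchers also catch excerpts and transformed copies), and the reference database is invented. The structural limitation is the same either way: the matcher sees similarity, never the lawful purpose of a use.

```python
import hashlib

# Hypothetical fingerprints supplied by rightsholders.
REFERENCE_DB = {hashlib.sha256(b"protected song master").hexdigest()}

def fingerprint(data: bytes) -> str:
    """Toy 'content recognition': an exact hash stands in for a
    perceptual fingerprint."""
    return hashlib.sha256(data).hexdigest()

def upload_filter(data: bytes) -> str:
    """Block any upload whose fingerprint matches the reference database."""
    if fingerprint(data) in REFERENCE_DB:
        # The filter sees only the match. It cannot tell a pirated copy
        # from a quotation, a parody or a review, because the lawful
        # purpose of a use is context, not content.
        return "blocked"
    return "published"

print(upload_filter(b"protected song master"))   # blocked, even if the
                                                 # upload were a lawful use
print(upload_filter(b"an original home video"))  # published
```

Wrongly blocked uploads also illustrate the status quo problem described above: by the time a complaint-and-redress mechanism restores the content, its moment may already have passed.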

Nonetheless, one thing is certain: the intermediaries (online platforms) and the producers of culture and the arts must reach a new profit equilibrium. It seems clearly unreasonable that the platforms gain so much while the producers gain so little or nothing at all. A great number of artists, writers and performers demand a fairer share of the profits generated from their labour. The EU proposal certainly sets out a new and therefore uncertain policy. Nevertheless, its goal is undeniably fair: to create this new equilibrium, which contemporary societies demand. We may disagree about the means; we all agree on the necessity of a solution.

[1] LESSIG, Lawrence. What things regulate speech: CDA 2.0 vs. filtering. Jurimetrics Journal, v. 38, n. 4, p. 629-670, 1998, p. 669.

Picture credits: Property intellectual… by Max Pixel.
