Defining disinformation in the EU: a matter beyond linguistics

Miguel Pereira (Master’s student in European Union Law at the School of Law of the University of Minho)

The EU has been a trailblazer in combating disinformation. Through initiatives involving online platforms and the drafting of long-term strategies tackling multiple fronts, it has recognized the issue and attempted to address it through non-regulatory policy making. The instruments put forth to combat the phenomenon are often controversial (as is to be expected in any discussion touching on freedom of expression and information) and their effectiveness is hard to assess. The debate surrounding these instruments tends to absorb most of the attention, leaving less room to discuss the actual definition of disinformation. This concept is, nonetheless, vital to the successful implementation of policies in this area and to an adequate protection of fundamental rights in the EU, meriting a closer look.

Disinformation is often wrongly equated with, and used interchangeably with, “fake news”. This approach muddles the debate with imprecision and can be particularly pernicious for two reasons. On the one hand, it does not adequately capture the full scope of the problem, which goes well beyond fake news reporting and includes a wide array of:

  • methods – some as simple as mixing factual information with false facts, others involving more comprehensive manipulation through fake social media accounts, impersonation, account takeovers, and organized “trolling”;
  • types of false content – from false news reporting to fabricated videos or images (colloquially known as “deep fakes”); and
  • forms of dissemination – which include paid and targeted advertising as well as more organic forms related to users’ interactions (sharing, commenting, etc.).

On the other hand, the term has been appropriated by political actors to refer to media coverage with which they disagree or which they wish to tarnish, and is now heavily associated with partisan perceptions and poor journalism[1]. Even setting this discussion aside, the concept of “fake news” is far from finding universal acceptance in the literature. The 2018 JRC working paper on disinformation and “fake news” reviews several notions of disinformation and “fake news”, arriving at two main definitions for the latter: one anchored in verifiably false information (which can be detected through fact checking and allows for the identification of its source), and a broader one which includes “deliberate attempts at disinformation and distortion of news (…), the use of filtered versions to promote ideologies, confuse, sow discontent and create polarization”.

The European Commission seems to have opted for the latter concept and has, until recently, stuck closely to it, though avoiding the use of the term “fake news”. In its 2018 Communication “Tackling online disinformation: a European approach”, the Commission defines disinformation as “verifiably false or misleading information that is created, presented and disseminated for economic gain or to intentionally deceive the public, and may cause public harm. Public harm comprises threats to democratic political and policy-making processes as well as public goods such as the protection of EU citizens’ health, the environment or security. Disinformation does not include reporting errors, satire and parody, or clearly identified partisan news and commentary”[2]. While referring to verifiably false information, the Commission then includes misleading information, aligning with the broader definition identified in the JRC Report. It follows that, for verifiably false or misleading content to be considered disinformation:

  1. Its creation, presentation, or dissemination must be intentional;
  2. The intended result envisioned by those activities must be either economic gain or deceiving the public; and
  3. Its dissemination may cause public harm[3].

This particular concept of disinformation was incorporated both in the Code of Practice on Disinformation (“COP”) and in the Action Plan against Disinformation, but it is not without criticism. The inclusion of verifiably false information, which would contribute to increasing certainty as to the type of information considered false, is rendered innocuous by being immediately paired with “misleading”. While it holds true that the concept is intended to be flexible in response to an evolving threat, the fact remains that the term “misleading”, in its imprecision, lends itself to interpretation. Whereas verifiably false information is that which can be proven false through careful investigation, by juxtaposing it with documented facts relating to a specific situation or piece of content, the concept of misleading is, on the contrary, so wide as to accommodate scenarios where (purposefully or not) information is omitted or framed in a specific context that leads the reader to a conclusion with no exact correlation to reality (whether textually or through the addition of unrelated or biased visual elements).

Given the impact that measures in this field, even non-regulatory ones, might have on freedom of expression and information, a more stringent approach to the definition that underpins an entire body of EU policy would be preferable. This is especially so considering that, within the exercise of said freedom, measures targeting misleading information would necessarily need to be of a different severity than those targeting verifiably false content. This is not only due to the nature of the content itself but also because evidencing a link between misleading information and public harm is likely to be a much more demanding task than with verifiably false content. This is particularly important in the context of the COP, which tasks private entities with monitoring and acting upon disinformation, entrusting them with deciding whether specific content is false or misleading and capable of causing public harm, and with determining the adequate measures to address the threat.

James Pamment, in the first of a series of reports commissioned by the European External Action Service, criticizes the way in which a wide range of intents, methods, and actors were condensed into a single definition, noting that this may create unintended hindrances to policy responses. To address the issue, the author distinguishes between four concepts, as opposed to a single definition of disinformation, considering that these different concepts are better suited to addressing fundamental rights concerns and to reflecting institutional ownership over the policy responses, thereby contributing to a more tailored and adequate approach to the specific issues that each notion carries.

The author first considers misinformation to be the “distribution of verifiably false content without an intent to mislead or cause harm”, extricating it from disinformation, which is, in turn, defined as “the creation, presentation, and dissemination of verifiably false content for economic gain or to intentionally deceive the public, which may cause public harm”. Finally, the author introduces two additional concepts: influence operations, defined as “coordinated efforts to influence a target audience using a range of illegitimate and deceptive means, in support of the objectives of an adversary”, and foreign interference, understood as “coercive, deceptive, and/or nontransparent efforts by a foreign state actor or its agents to disrupt the free formation and expression of political will, during elections, for example”.

In its Communication on the European Democracy Action Plan (“EDAP”), the Commission adopted new terminology that is consistent with James Pamment’s report, with the important caveat that misleading content is still included in the definitions of both misinformation and disinformation – while the expression “verifiably” is removed[4]. The adoption of these concepts is also reflected in the Commission’s recently issued Guidance on Strengthening the Code of Practice on Disinformation, though it is unclear to what extent it will be carried into the final revised COP: while acknowledging the concepts and the need for tailored approaches to each, the Guidance then goes on to refer to disinformation in a general sense. Inclusion in the revised COP would also depend on the signatories’ willingness to adapt the wording on that point.

The recognition that different threats require different classifications and different countermeasures is a step towards a more adequate definition of the phenomenon. Notwithstanding that, some of the frailties we have highlighted remain unaddressed: the inclusion of “misleading” in the concepts of disinformation and misinformation, as well as the unexplained removal of “verifiably” (false content) from both. In addition, the new terminology has not been promptly adopted by all EU institutions and bodies – an example of the persisting inconsistencies in this regard is Article 5 of the annex to the recently announced Lisbon Declaration on Digital Rights, which fails to accurately reflect the definitions set out in the EDAP[5].

While, so far, at EU level, the concept of disinformation has only been included in soft law instruments, the same is not true at Member State level. The Portuguese Charter of Human Rights in the Digital Era, in its now infamous Article 6, relies on the original definition reached in the 2018 Communication “Tackling online disinformation: a European approach”, closely following its wording (not distinguishing between different categories and including misleading information in the concept). This particular Article was at the heart of intense public debate following the Charter’s approval by the Portuguese legislature, though most of the criticism fell not on the definition of disinformation but rather on a different (ambiguous) provision which encourages the creation of entities, endowed with public utility status, that assign quality seals to media outlets. Considering that it is up to the Government to attribute public utility status to an entity[6], the concerns focused on the potential for (at least a perceived) “State ownership of the truth”. The President of the Portuguese Republic, in light of the concerns that were raised and the potential impact on fundamental rights, asked the Constitutional Court to review the constitutionality of the norm. The President’s request for a review of constitutionality highlights the concerns with the vagueness of the concepts used to define disinformation, namely the wording “verifiably false or misleading narrative”. The review is still pending before the Portuguese Constitutional Court and will be one of the first endurance tests for the EU’s concept of disinformation.

On the EU side, recent developments may make the issue of reaching a uniform definition that is compatible with fundamental rights more pressing. The Commission’s proposal for a Digital Services Act (“DSA”) makes several references to disinformation in its recitals without introducing any definition of the concept. In the absence of a definition in the regulation, it is expected that the preceding communications on the issue will be used to interpret those recitals. While recitals do not have legally binding force, they are important elements for interpreting the operative provisions contained in EU law instruments, as they provide the reasoning for and the objective to be achieved by the latter. If the DSA is approved as is, the EU concept of disinformation, which has so far been included exclusively in non-regulatory policy instruments, will gain renewed significance for EU law. The discussions now taking place at national/constitutional level might also find a new stage at the Court of Justice of the European Union.

From this analysis it is clear that the EU’s efforts to reach a definition that adequately captures the phenomenon have been, at best, uneven and uncoordinated among institutions and stakeholders (with the Commission taking the lead in putting forth the concept) and, at worst, may endanger the full enjoyment of EU citizens’ fundamental rights, namely the right to freedom of expression and information. Notwithstanding the commendable (and unprecedented) steps that the EU has taken to tackle the issue, the fundamental rights at stake require that attention be drawn to the concept that serves as the basis for the EU’s policy response. Solutions such as removing misleading information from the definitions or, at least, placing it in a category of its own should be considered. The same must be said of verifiably false information, a concept that itself seems to require further densification. This is particularly important as online platforms increasingly rely on these concepts (for instance, through the COP) and are tasked with policing and sanctioning speech online. While it seems that, for the moment, there will be no direct regulatory push to address disinformation, the fact remains that the concept is starting to creep into legislative instruments, making the matter ever more pressing. To achieve meaningful results in combating disinformation, a common language must be reached, through dialogue with a wide range of stakeholders, based on the experience and evidence gained so far, and with the main objective of ensuring the protection of fundamental rights in the EU.


[1] HLEG, A multi-dimensional approach to disinformation: Report of the independent High Level Group on fake news and online disinformation (Luxembourg: Publications Office of the European Union, 2018), 10, accessed July 24, 2021, https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=50271.

[2] COM(2018) 236 final, 3-4. Besides excluding reporting errors, satire and parody, and clearly identified partisan news and commentary (a concept that, in itself, raises some questions), the Commission also excludes illegal content from its scope (such as hate speech), as it is otherwise covered by EU or national legislation.

[3] Implicit in this definition is the exclusion of misinformation, which amounts to a similar practice but where there is a lack of knowledge of the falseness or misleading nature of the information, as well as an absence of intent to achieve economic gain or deceive the public.

[4] Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, On the European democracy action plan (COM(2020) 790 final), 17-18, accessed July 24, 2021, https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52020DC0790&from=EN. The definitions read as follows:

  • “misinformation is false or misleading content shared without harmful intent though the effects can still be harmful, e.g. when people share false information with friends and family in good faith;
  • disinformation is false or misleading content that is spread with an intention to deceive or secure economic or political gain and which may cause public harm;
  • information influence operation refers to coordinated efforts by either domestic or foreign actors to influence a target audience using a range of deceptive means, including suppressing independent information sources in combination with disinformation; and
  • foreign interference in the information space, often carried out as part of a broader hybrid operation, can be understood as coercive and deceptive efforts to disrupt the free formation and expression of individuals’ political will by a foreign state actor or its agents”.

[5] The Article makes reference to the various concepts that have just been described as, simply, “disinformation”. The Article reads as follows: “Everyone should be empowered to make informed choices on the information they are exposed to and should be protected from intentional or coordinated attacks manipulating online spaces (including those conducted without human intervention through automated processes) for the dissemination of disinformation, created for economic gain or to intentionally deceive the public and with an actual or foreseeable negative effect to democratic, political and policymaking processes as well as to citizen’s health, the environment or security”. It should be noted that, as per the press release issued by the Portuguese Presidency of the Council of the European Union, while the Declaration was signed by all 27 Member States, the annex was signed only by 17, see, “Lisbon Declaration on Digital Rights is the “kick-start” for an international charter”, accessed July 24, 2021, https://www.2021portugal.eu/en/news/lisbon-declaration-on-digital-rights-is-the-kick-start-for-an-international-charter/.

[6] Article 16 of the law governing the status of public utility grants the power to attribute, renew or revoke this status to the Prime Minister or to those to whom that power is delegated by the Head of Government. See Lei n.º 27/2021, accessed September 1, 2021, https://dre.pt/home/-/dre/163442504/details/maximized.

Picture credits: Wokandapix
