By Miguel Pereira (Master in European Union Law from the School of Law of the University of Minho)
On 16 June 2022, the Strengthened Code of Practice on Disinformation was signed and presented to the European Commission, marking the end of a year-long process that revamped the original 2018 Code of Practice on Disinformation.
The Strengthened Code, following the lines of the 2018 Code, is a self-regulatory and voluntary mechanism by which participants in the digital economy assume commitments to combat disinformation online. It forms part of a wider strategy that the EU institutions have developed since 2018, but it has assumed a central role in the EU’s response to the phenomenon. The 2018 Code was particularly important in highlighting the mechanisms that online platforms had developed (and could develop) to address the issues this threat posed to their services, and it allowed for closer cooperation between its signatories and the Commission, with a special focus on two events: the 2019 European Parliament election and the Covid-19 crisis.
Notwithstanding the successes we have highlighted and the groundbreaking nature of the initiative, a 2020 assessment of the implementation of the Code levelled criticism at the lack of oversight, the erratic reporting practices, the vagueness of the commitments, the relatively disappointing adherence by industry players, and the difficulty in evaluating its effectiveness and enforcing the commitments vis-à-vis its signatories. Based on this assessment, the Commission issued a Guidance calling for a strengthening of the Code’s structure and commitments and laying out specific areas which merited improvement. The signatories heeded the call and led the review process, with the resulting Strengthened Code closely following the recommendations laid out in the Commission’s Guidance.
Before delving into an overview of the changes introduced by the Strengthened Code, a few general considerations are in order. First and foremost is the adoption of a new definition of disinformation, in line with the one put forth in the European Democracy Action Plan (“EDAP”), which points to a welcome stabilization of the concept – despite the issues that this (more detailed) definition still presents and which we have discussed in a previous blogpost.
Another mark of the Strengthened Code is the affirmation of its alignment with the Digital Services Act (“DSA”) and a commitment to the co-regulatory approach which the Commission favored in its Guidance. This is made evident not only through the mimicking of certain provisions contained in the proposed DSA and the express reference to articles of the regulation, but also in the differentiated treatment the Strengthened Code confers on different signatories, considering their size, resources and risk profile. The commitments and measures the signatories sign up for should be relevant to the services they provide and apply only within the European Economic Area. As regards timelines, signatories are expected to implement the commitments and measures they signed up to within six months of the signature of the Strengthened Code and to report on their implementation by the following month. Finally, we should note the increase in signatories: the Strengthened Code now counts 34 signatories, ranging from online platforms and industry associations to civil society and fact-checking organizations.[1]
1. Commitments and measures
The Strengthened Code maintains the pillar structure introduced in the 2018 Code, while expanding it. The Code now comprises six pillars/areas of intervention: Scrutiny of Ad Placements, Political Advertising, Integrity of Services, Empowering Users, Empowering the Research Community and (the novelty in the Strengthened Code) Empowering the Fact-Checking Community. These pillars are fleshed out in commitments that state the aims to be pursued in the context of the Code and which were, in the past, criticized for a lack of clarity in their language and in the setting of specific objectives. To overcome this difficulty, the signatories not only opted for language that establishes concrete goals, but also expanded the number of commitments from 21 to 44 and further detailed them with measures that establish concrete actions and, where possible, timelines for their completion (a total of 128 specific measures were introduced).
As regards Scrutiny of Ad Placements, the emphasis is placed on demonetizing disinformation: by limiting the posting of ads on pages that are consistent purveyors of disinformation, by shifting part of the responsibility to ad placers, who should be able to track the pages on which their ads (and those of their clients) are placed, and by insisting on the implementation of brand safety tools by online platforms to allow ad placers greater control and oversight over the placement of their ads. This commitment is complemented by two others that seek, on the one hand, to increase control over the content of ads in order to prevent the distribution of disinformation through commercial messages and, on the other, to expand cooperation with all the players in the advertising ecosystem, fostering a greater exchange of information and the integration of fact-checking.
As for Political Advertising, the signatories commit to reaching a common understanding of the concept, as well as of issue-based advertising, an objective which was already included in the 2018 Code but was not fulfilled. To address the previous Code’s shortcomings, the signatories commit to adhering to the definition of political advertising set forth in the Commission’s proposal for a Regulation on the transparency of political advertising and to agreeing on a definition of issue-based advertising in case the final text of the Regulation fails to include one. Considering the proposal does not address that issue, it is likely the signatories will have to develop their own definition – a scenario which is not ideal: given the close link this type of advertising has to fundamental rights, a legislative solution reached by bodies with democratic legitimacy would be preferable.
Still in the context of political advertising, signatories commit to ensuring efficient, intelligible and transparent labelling of these ads, allowing users to be aware of the identity of the financer of the ad, as well as of its political nature and the parameters used to target the user. In an effort to expand on the methodology used for the 2019 European Parliament election and several national elections, signatories are to register these ads in searchable repositories, along with the associated details.
The third pillar, Integrity of Services, is focused on reaching a common understanding of impermissible manipulative behavior – the lack of which caused issues in the 2018 Code, particularly in the context of reporting. The signatories, while agreeing to further develop knowledge and cooperation in this area (namely through the creation of channels for information exchange between platforms), have settled, for the moment, on the behaviors described in the AMITT Disinformation Tactics, Techniques and Procedures Framework. A novelty in this context is the inclusion of a commitment to address the use of AI systems for disinformation purposes, namely through deep fakes.
User Empowerment seems to be the pillar which the signatories expanded the most, focusing efforts on increasing media literacy initiatives, the adequate flagging of false, misleading and fact-checked content, and increasing the visibility of content from authoritative sources. Additionally, great emphasis is placed on the safe design and development of system architecture that curbs the spread of disinformation. In line with the DSA, online platforms are also expected to grant users greater control over their interaction with content, through the creation of alternative recommender systems for users to choose from and by ensuring that users can flag false or misleading content, with the associated redress mechanisms.
Something to note is the inclusion of commitments specifically targeting messaging apps, which were absent from the 2018 Code, namely through the exploration of ways for content labels to remain visible when content is shared through these apps and through cooperation with fact-checkers.[2]
Regarding Empowering the Research Community, the subject of some of the bluntest criticism in the 2020 assessment (described as “episodic and arbitrary” and as following a closed approach, at the absolute discretion of the signatories), the tone has shifted to an open-access policy for non-personal data, via the development of well-equipped APIs. The Strengthened Code also looks to establish a framework for researchers to access personal data, via the creation of an independent body tasked with vetting researchers and research projects.
Finally, the newly added pillar focused on Empowering the Fact-Checking Community seeks to establish a framework for cooperation between platforms and fact-checking organizations, inclusive of adequate remuneration for the services provided by the latter, and pushes platforms to integrate fact-checkers’ work into their services, ensuring full coverage of all EU languages and Member States. This pillar also addresses commitments by the fact-checkers themselves, who are expected to compile fact-checked material in a common database and to adopt and adhere to internationally recognized standards – with a future European code governing the activity presented as one of the possibilities.
2. Transparency, reporting and monitoring
Within six months of signing the Code, signatories are expected to set up a common Transparency Center where they register and make publicly available all the information related to the implementation of the Code, through agreed-upon reporting metrics which should be easily searchable and understandable by the public (including the tracking of changes to internal policies). This Transparency Center is to be funded by the signatories, in proportion to their risk profile and resources.
One of the most considerable changes is the complete overhaul of the reporting and monitoring structure of the Code, identified in 2020 as one of its biggest weaknesses. The Commission, in its Guidance, asked signatories to consider introducing two distinct sets of Key Performance Indicators: Service Level Indicators (“SLIs”), which should translate into quantitative and qualitative reporting on the implementation of each commitment, reported by each individual signatory, and Structural Indicators, which should assess the overall effectiveness of the Strengthened Code in addressing the disinformation threat as a whole.
While the signatories were able to reach agreement on the SLIs, which now accompany each commitment and are split between Qualitative Reporting Elements (reporting on the measures and policies adopted and how they relate to the fulfilment of the commitments) and SLIs proper (meant to provide quantitative information on actions taken to meet the Code’s obligations),[3] no Structural Indicator is put forth in the Strengthened Code. Notwithstanding that, the signatories have committed to developing these indicators within nine months of signing the Code and to publishing an initial measurement with their first full report.
These Structural Indicators are to be agreed upon within the context of the Permanent Task-Force, a body created to monitor and update the Code. Its participants are the signatories, the European Regulators Group for Audiovisual Media Services, the European Digital Media Observatory, the European External Action Service and the Commission, which should chair its meetings; its plenary should meet at least every six months.
Finally, we should note an attempt to overcome one of the main difficulties of the 2018 Code: the absence of a common reporting format. To this end, the signatories have agreed not only to reach common definitions, as already noted, but also to come up with common reporting templates that should facilitate data analysis for monitoring and research purposes. The signatories have also committed to granting the Commission greater access to data in times of crisis and, in anticipation of the DSA, Very Large Online Platforms (“VLOPs”) are expected to submit themselves to independent audits regarding the commitments assumed under the Code.
3. Final comments
From this quick overview of the main features of the Strengthened Code we can extract some general undertones embedded in the Code’s structure. There seems to be a clear sense of experimentalism and a focus on information collection with a view to developing the Code, as opposed to the creation of a rigid, immutable structure. This is further confirmed by the introduction of the Permanent Task-Force, whose main purpose is indeed to monitor and update the Code’s commitments and to highlight best practices the signatories can implement. This is complemented by a focus on transparency and accountability towards the public.
A notable absence in the Code is the expulsion mechanism that was present in the 2018 Code. While signatories may still voluntarily withdraw from the Code, the remaining signatories can no longer expel them. The difficulties in enforcing the 2018 Code were quite clearly pointed out by the Commission in its Guidance and assessment. It seems that, going forward, some form of enforcement will only come through the interaction with the DSA, which provides for the creation of and participation in codes of conduct and considers them risk mitigation measures (Article 27) – and this assumes that the Commission will consider not only participation but also effective compliance with said codes of conduct as a mitigating factor. This enforcement will, however, likely be limited to VLOPs, as no provision of this kind has been inscribed in the DSA proposal regarding other digital service providers and the Strengthened Code does not provide for its extension to other signatories.
The focus on deepening cooperation among signatories and between them and third parties is also quite clear, not only through mechanisms such as the Permanent Task-Force and the Transparency Center, but also through the commitments encouraging work with researchers and fact-checkers, which have now gained signatory status.
While commitments such as those that seek to standardize the information to be reported and the format in which it should be reported are among the most notable improvements of the Code, doubts remain about how to actually measure the effects of specific actions (for instance, the difficulty of measuring the impact of platforms’ actions on user behavior). Only time and careful review will tell whether these metrics are adequate and how useful they are in assessing the performance of the Code or in combating disinformation as a whole.
Flexibility becomes a core value of the Strengthened Code through the creation of its own structures, which can more easily adapt it to the emerging needs of the signatories and to the ever-changing tactics employed by disinformation purveyors. While flexibility is key to combating the phenomenon, and considering the relevance of the role that some of the Code’s signatories have assumed in our societies, transparency of the processes leading up to codes of conduct will also be key going forward. Such transparency has not been a hallmark of this revision process: the insights available were gained mostly through the Commission’s Guidance, while the discussions leading up to the Code were not subject to public scrutiny.
The Strengthened Code marks a clear shift in EU policy-making regarding the digital economy, with a push towards the co-regulatory approach (a series of other codes of conduct are on the horizon, such as the Code of Conduct on access to platform data and the aforementioned Code of Professional Integrity for Independent European fact-checking organisations). While the flexibility and expert input afforded by such close cooperation with industry players are remarkable and, possibly, essential to the effective governance of digital services, the issues with the transparency of the processes and the accountability of its participants must be addressed in order to avoid pitfalls that are, unfortunately, too familiar to the EU: a lack of understanding of the functioning of these structures, which might result in distrust by the general public, and, finally, the ever-present accusations of a democratic deficit.
[1] While the Commission has announced 34 signatories, at the moment the individual subscription documents detailing the commitments each has assumed are available for only 33. See https://digital-strategy.ec.europa.eu/en/library/signatories-2022-strengthened-code-practice-disinformation, accessed 14 July 2022.
[2] A good example of cooperation between fact-checkers and messaging apps is the service developed by Maldita.es, now a signatory of the Code. See https://www.poynter.org/fact-checking/2021/whatsapp-can-be-a-black-box-of-misinformation-but-maldita-may-have-opened-a-window/, accessed 14 July 2022.
[3] While the Commission chose the designation of SLI for both quantitative and qualitative information, the signatories seem to have opted to keep the designation only for quantitative reporting elements.
Picture credits: Geralt.