European Ethical Charter on the use of artificial intelligence in judicial systems and their environment: what are the implications of this measure?


 by Amanda Espiñeira, Master's student at the University of Brasília

Artificial intelligence has become a topic of great interest for the advancement of the information society and automation. Across many domains, from art and gastronomy to the world of games, AI mechanisms expand human creativity and capabilities, and they are especially important when it comes to judicial systems. A field that long remained closed to innovation and digital transformation is now opening up, allowing greater speed and transparency in the decisions of the legal world. In other words, AI promises to fill a gap in an area that still relies on rigid processes, such as registry offices, which are almost synonymous with bureaucracy.

Beyond the importance and efficiency of the technology itself, debating the ethical aspects of this area is extremely relevant, because AI can extract insights we could never reach using traditional data-mining techniques. This debate is even more important in the context of recent data protection regulation, especially the GDPR (General Data Protection Regulation).

Thus, the European Commission for the Efficiency of Justice (CEPEJ) of the Council of Europe adopted the first European text setting out ethical principles for the use of artificial intelligence (AI) in judicial systems, published on December 4, 2018[1].

The CEPEJ brings together experts from the 47 member states of the Council of Europe. According to the Council of Europe, its aim is to improve the quality and efficiency of European judicial systems and to strengthen court users' confidence in such systems.

The Charter is addressed to public and private stakeholders responsible for the design and deployment of artificial intelligence tools and services that involve the processing of judicial decisions and data (machine learning or any other methods deriving from data science). It also concerns public decision-makers in charge of the legislative or regulatory framework.

The Charter lists five principles to guide this use:

1. RESPECT FOR FUNDAMENTAL RIGHTS;
2. NON-DISCRIMINATION;
3. QUALITY AND SECURITY: with regard to the processing of judicial decisions and data, use certified sources and intangible data, with models elaborated in a multi-disciplinary manner, in a secure technological environment;
4. TRANSPARENCY, IMPARTIALITY AND FAIRNESS;
5. UNDER USER CONTROL: preclude a prescriptive approach and ensure that users are informed actors, in control of the choices made.

One relevant development of the first principle is that, in line with the idea of privacy by design, AI technology should be based on ethical-by-design or human-rights-by-design approaches. This means that, from the design and learning phases onward, rules prohibiting direct or indirect violations of the fundamental values protected by the conventions are fully integrated.

Another concern is the particular care required when processing is directly or indirectly based on "sensitive" data. This could include alleged racial or ethnic origin, socio-economic background, political opinions, religious or philosophical beliefs, trade union membership, genetic data, biometric data, health-related data, or data concerning sexual life or sexual orientation. When such discrimination has been identified, corrective measures must be considered to limit or, if possible, neutralise these risks, along with awareness-raising among stakeholders. The decisions must be treated with the greatest reservations in order to prevent discrimination and to preserve the guarantees of a fair trial.

Making these instruments more transparent is another concern surrounding AI technologies. In this context of making decisions taken by AI more accountable, IBM Global Business Services created a software service to detect bias in AI models and track the decision-making process[2]. This service should allow companies to track AI decisions as they occur and monitor any "biased" actions, to ensure that AI processes are in line with regulation and overall business objectives. According to a Forbes interview with IBM: "by measuring IBM's predicted decisions against the actual decisions taken by an AI program, including the weight it gives and the confidence it has on that decision, the software can theoretically figure out whether the algorithm is biased and determine the cause of that bias."
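The general idea behind this kind of bias monitoring can be illustrated with a minimal sketch (this is not IBM's actual service; the group labels, the audit data, and the 0.2 disparity threshold are hypothetical): compare a model's rate of favourable decisions across demographic groups and flag cases where the gap exceeds a chosen threshold.

```python
# Minimal sketch of a group-disparity bias check on an audit log of AI
# decisions. Hypothetical data and threshold, not IBM's actual product.

def decision_rates(decisions, groups):
    """Return the share of favourable (1) decisions for each group."""
    rates = {}
    for g in set(groups):
        picks = [d for d, gr in zip(decisions, groups) if gr == g]
        rates[g] = sum(picks) / len(picks)
    return rates

def flag_bias(decisions, groups, max_gap=0.2):
    """Flag bias when the gap between group rates exceeds max_gap."""
    rates = decision_rates(decisions, groups)
    gap = max(rates.values()) - min(rates.values())
    return gap > max_gap, rates

# Hypothetical audit log: 1 = favourable decision, 0 = unfavourable.
decisions = [1, 1, 0, 1, 0, 0, 0, 1, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

biased, rates = flag_bias(decisions, groups)
# Group A receives favourable decisions 60% of the time, group B only
# 20%, so the 0.4 gap exceeds the threshold and the check flags bias.
```

A real auditing system would of course go further, weighing decision confidence and tracing the cause of the disparity, as the Forbes quotation suggests, but the core comparison of predicted outcomes across groups follows this pattern.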

After the principles, the Charter contains four appendices. The first and largest, "In-depth study on the use of AI in judicial systems, notably AI applications processing judicial decisions and data", written by researchers from different European countries, brings many examples of the use of AI in Europe, such as experiments conducted in France, and shows pictures and graphics representing good and bad uses of AI for judicial systems. It argues that the precautionary principle should be applied to risk-assessment policies on questions relating to the protection of personal data and AI.

The second appendix discusses the uses of AI in European judicial systems, with the goal of encouraging, to different degrees, their application in light of the principles and values set out in the Ethical Charter. One of the most interesting uses of AI, which challenges bioethics and is under discussion all over the world, including in Brazil[3], is the use of algorithms in criminal matters to profile individuals.

In a very didactic way, the third appendix provides a glossary of concepts and categories of analysis. Finally, to enable an evaluation of the uses of AI and to guide the recipients of the document, there is a checklist for integrating the Charter's principles into one's processing methods.

For the CEPEJ, the application of AI in the field of justice can contribute to improving efficiency and quality, and it must be implemented in a responsible manner that complies with fundamental rights, guaranteed in particular by the European Convention on Human Rights (ECHR) and the Council of Europe Convention on the Protection of Personal Data.

This initiative is extremely necessary to guide the use of AI in the EU, since models based on guidelines and principles are well suited to regulatory issues related to the Internet and new technologies. The Charter should serve as an example for other countries and regions, especially as it contains examples of good practice and evaluation tools.




Picture credits: Hand… by TheDigitalArtist.


