AI has an almost unthinkable number of potential uses: machine learning technologies filter email and spam, power automated customer support, and even help diagnose serious illness. They can also be used for what is often called “judicial analytics”: the use of statistics and machine learning to understand or predict judicial behaviour. In other words, with AI one could attempt to predict how a court would rule in a given case. Not in France, though, apparently: carrying out such work there can now lead to up to five years in prison.
The French Code of Administrative Justice, recently amended by Article 33 of the Justice Reform Act, states:
‘The identity data of magistrates and members of the judiciary cannot be reused with the purpose or effect of evaluating, analysing, comparing or predicting their actual or alleged professional practices.’
The EU has always taken a more cautious approach towards new technologies, mainly from a data protection perspective. The GDPR hardly needs an introduction: it alarmed the world at first, but its aim is to foster a comprehensive data protection mindset and to provide a standardised set of data protection laws across all EU member states. It should therefore make it easier for EU citizens to understand how their data are being used and how to raise complaints, even when they are not in the same country as the data controller or the data themselves. The EU has also developed a set of guiding principles for ethical considerations in AI research and development, such as non-discrimination and transparency.
This caution is understandable: China, for one, has embraced pervasive facial recognition technology, which makes the rest of the world uneasy about the potential uses of AI and big data. Taking caution too far, however, can also cause harm.
The main result of this legislation is that the French will have less information about how their own judicial system works, people will have access to fewer tools to help them, and it may even reduce access to justice down the line.
Supporters of the new amendment claim that there is no real harm, since analytics can still be carried out without using judge information. Judging by the text of the law, this is admittedly true. However, serious and detailed analysis or prediction of judicial behaviour requires taking into account the individual differences among judges, which can be substantial. Analysis stripped of the judges’ information is therefore vastly less accurate and most likely pointless.
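To illustrate the point, here is a minimal, entirely hypothetical sketch in Python. It uses synthetic data and scikit-learn, and does not resemble any real French dataset or any actual analytics product: it simply builds artificial cases in which judges genuinely differ in how often they rule for the applicant, then compares a simple prediction model that can see the judge’s identity with the same model after that information is removed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

n_cases, n_judges = 5000, 20
judge_ids = rng.integers(0, n_judges, size=n_cases)      # which judge heard each case
case_strength = rng.uniform(-1, 1, size=n_cases)         # a stand-in for the merits of the case
judge_effect = rng.normal(0.0, 1.5, size=n_judges)       # substantial judge-to-judge variation

# The (synthetic) probability of ruling for the applicant depends on the case AND on the judge.
logit = 1.5 * case_strength + judge_effect[judge_ids]
outcome = rng.random(n_cases) < 1 / (1 + np.exp(-logit))

# Model A: case features plus one-hot encoded judge identity.
X_with_judge = np.column_stack([case_strength, np.eye(n_judges)[judge_ids]])
# Model B: the same case features with all judge information removed.
X_without_judge = case_strength.reshape(-1, 1)

for name, X in [("with judge identity", X_with_judge),
                ("without judge identity", X_without_judge)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, outcome, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(f"{name:>24}: test accuracy = {accuracy_score(y_te, model.predict(X_te)):.3f}")
```

On data constructed this way, the judge-blind model’s accuracy falls back towards whatever the case features alone can explain, which is precisely why analysts argue that removing judge identity guts the analysis.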
Another consequence of the legislation, from a less human-rights and more economic point of view, is that with the ban and its serious criminal penalty, startups and researchers will likely abandon this field and turn to areas where they do not have to watch their every step and worry about accidentally crossing into dangerous territory by carrying out analysis that may land them in jail for years. And for what? The protection of judges’ personal data? Information that is publicly available? Not to mention that larger companies may choose other jurisdictions for their commercial activities, where such analyses remain possible and litigation can be conducted more efficiently, placing a significant financial burden on economically weaker parties. The combined effect of these outcomes, with even less information available, can harm the general public.
Various reasons for the amendment have been put forward, starting with a general need for anonymity. This seems to be the most popular line of reasoning among those in favour of the new law: protecting the privacy of judges. Put simply, judges did not want the patterns in their decisions – now relatively easy to model – to be open for all to see.
Another possible reason is economic fear: public access to this processed data may reduce the need for lawyers. And of course there is the fear of bias: human-generated data used to train machine learning algorithms can easily be tainted by racism, sexism, or other prejudices. AI could even deepen or create wealth inequalities in the legal system. Access to legal services – and access to justice – already depends in many cases on the ability to pay: money buys better, higher-quality legal representation. If only the rich can afford the latest software, AI could reinforce this phenomenon.
While this last concern might seem justifiable at first glance, a ban on judicial analysis is most likely not the solution to the problem, as it eliminates not only the bad but all the good as well.
First and foremost, it is most likely illegal: both the French Constitution and the European Convention on Human Rights protect freedom of speech as a fundamental human right, which this new law restricts. Freedom of speech is not an absolute right and can be restricted, but only subject to a number of conditions, which in this case are most likely not met.
Secondly, it is also particularly illogical. The cases such analysis would draw on are publicly available. If the data are already in the public domain, anyone should have the right to analyse them and publish the results.
Lastly, from a practical perspective, if used well, AI tools could actually help expand access to courts, justice and legal advice, which are now too often only available to the wealthy.
All in all, the distressing fact is this: a government and its justice system have decided to criminalise revealing, through statistical and comparative analysis, how judges think about certain legal issues. Let’s hope the rest of the world does not follow.