Jail time for using AI – taking privacy a tad too far

AI has an unthinkably high number of potential uses: machine learning technologies are used in email spam filtering, for automated customer support, even to diagnose serious illness. It can also be used for what is often called “judicial analytics”, which is the use of statistics and machine learning to understand or predict judicial behaviour. So basically, by using AI, one could successfully predict how a court would rule in a given case. Not in France though, apparently: carrying out such work there can lead to up to five years in prison.

The French Code of Administrative Justice, recently amended by Article 33 of the Justice Reform Act, states:

‘The identity data of magistrates and members of the judiciary cannot be reused with the purpose or effect of evaluating, analysing, comparing or predicting their actual or alleged professional practices.’

The EU has always taken a more cautious approach towards new technologies, mainly from a data protection perspective. We assume we do not have to introduce the GDPR, which scared the world at first, but whose aim is rather to foster a comprehensive data protection mindset and to provide a set of standardised data protection laws across all EU member countries. It should make it easier for EU citizens to understand how their data is being used, and also how to raise complaints, even if they are not in the same country where the data controller or their data is located. The EU has also developed a set of guiding principles on ethical considerations, such as non-discrimination and transparency, in AI research and development.

The cautious approach is understandable, considering, for one, the real-life example of China, a country that has embraced pervasive facial recognition technology and makes the rest of the world uneasy about the potential uses of AI and big data. However, taking caution too far can also cause harm.

The main result of this legislation is that the French will have less information about how their own judicial system works, people will have access to fewer tools to help them, and it may even reduce access to justice further down the line.

Supporters of the new amendment claim that there is no real harm, since analytics can still be carried out without using judge information. Going by the text of the law, this is admittedly true; however, conducting serious and detailed analysis or prediction of judicial behaviour requires taking into account the individual differences among judges, which can be substantial. This essentially means that analysis without judge-level information is vastly inaccurate and most likely completely pointless.
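
To make the point concrete, here is a minimal, purely illustrative sketch – not any real judicial analytics product, and trained only on invented data – comparing a toy outcome predictor with and without a judge identifier as a feature. All feature names (case_type, claim_size, judge_id) and the simulated judge leanings are hypothetical.

```python
# Toy simulation: does knowing the judge improve outcome prediction?
# All data, feature names and effect sizes here are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 5_000

case_type = rng.integers(0, 5, n)    # hypothetical case category
claim_size = rng.normal(0, 1, n)     # standardised claim amount
judge_id = rng.integers(0, 20, n)    # 20 hypothetical judges

# Simulate judge-specific leanings: some judges rule one way more often.
judge_bias = rng.normal(0, 1.5, 20)[judge_id]
logit = 0.4 * claim_size + 0.2 * case_type + judge_bias
outcome = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

without_judge = np.column_stack([case_type, claim_size])
with_judge = np.column_stack([case_type, claim_size, judge_id])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("accuracy without judge:",
      cross_val_score(clf, without_judge, outcome, cv=5).mean())
print("accuracy with judge:   ",
      cross_val_score(clf, with_judge, outcome, cv=5).mean())
```

On data simulated this way, the judge-aware model scores noticeably higher, and that gap is precisely what the amendment forces analysts working on French cases to give up.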

Another consequence of the legislation, from a less human-rights and more economic point of view, is that with the ban and the serious criminal penalty, startups and researchers are likely to leave this field behind and turn to areas where they won’t need to watch their every step and worry about accidentally crossing into dangerous territory by carrying out analysis that may land them in jail for a long time. For what? The protection of judges’ personal data? Information that is publicly available? Not to mention that bigger companies might choose different jurisdictions for their commercial activities, where such analyses are possible and litigation can be pursued more efficiently, which will mean a significant financial burden for the economically weaker parties. These potential outcomes, with even less information available, can cause harm to the general public.

Various reasons for the amendment have been put forward, starting with the general need for anonymity. This seems to be the most supported line of reasoning among those in favour of the new law: the protection of the judges’ privacy. Basically, the judges did not want the patterns in their decisions – now relatively easy to model – to be potentially open for all to see.

Another possible reason is a fear of an economic nature: public access to this processed data may reduce the need for lawyers. And of course, there is the fear of bias: the human-generated data used to train machine learning algorithms can easily be tainted by racism, sexism, or other biases. AI could even deepen or create wealth inequalities in the legal system. Already, access to legal services – and access to justice – often depends on the ability to pay: money buys better, higher-quality legal representation. If only the rich can afford the latest software, AI could strengthen this phenomenon.

While this last one might seem like a justifiable reason at first glance, a ban on judicial analysis is most likely not the solution to the problem, as it eliminates not only the bad but all the good as well.

First and foremost, it is most likely illegal: both the French Constitution and the European Convention on Human Rights protect freedom of speech as a fundamental human right, which this new law restricts. Since it is not an absolute right, freedom of speech can be restricted, but only subject to numerous conditions, which in this case are most likely not met.

Secondly, it is also particularly illogical. The cases such analysis would draw on are publicly available information. If the data is already in the public domain, anyone should have the right to analyse it and to show or reveal the outcome of such analysis.

Lastly, from a practical perspective, if used well, AI tools could actually help expand access to courts, justice and legal advice, which are now too often only available to the wealthy.

All in all, the distressing fact is this: a government and its justice system have decided to criminalise revealing, through statistical and comparative analysis, how judges think about certain legal issues. Let’s hope the rest of the world will not follow.

“Is your team the dream team? What percentage should each founder get?” One of the core ingredients of success is the right team, with complementary skills and personalities: early-stage investors (and business partners too, by the way) will invest in the team, not the idea. Our goal is to guide you in building a strong and well-functioning team, as well as to help you uncover potential friction points or weaknesses in the team, so that you can address them at the very beginning. When it comes to a fair split with your co-founders, if you need a reference point, or just want reassurance, we have developed our own tool for equity split calculation. Hint: the one answer that’s certainly wrong is a hasty 50-50 split.
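
As a back-of-the-envelope illustration – not our actual calculator, whose methodology is not described here – one common approach scores each founder on weighted contribution factors and normalises the scores into percentages. Every factor name and weight below is hypothetical.

```python
# Hypothetical weighted-factor equity split; all factors and weights are illustrative.
WEIGHTS = {"idea": 1.0, "execution": 3.0, "capital": 2.0, "domain_expertise": 2.0}

def equity_split(scores: dict) -> dict:
    """Turn per-founder factor scores (0-10) into equity percentages."""
    totals = {
        founder: sum(WEIGHTS[f] * s for f, s in factors.items())
        for founder, factors in scores.items()
    }
    grand_total = sum(totals.values())
    return {founder: round(100 * t / grand_total, 1) for founder, t in totals.items()}

print(equity_split({
    "alice": {"idea": 9, "execution": 5, "capital": 2, "domain_expertise": 8},
    "bob":   {"idea": 2, "execution": 9, "capital": 6, "domain_expertise": 4},
}))
# {'alice': 47.3, 'bob': 52.7} -- rarely a clean 50-50
```

Even this crude sketch shows why a reflexive 50-50 split is suspect: as soon as contributions are weighed explicitly, the numbers almost never come out equal.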

You have spotted a problem and found a viable solution – in other words, you have your idea. What’s the next step? You need to make sure that the problem your business is trying to solve is a valid problem for a wide enough group.

When you spot a problem, it’s all too easy to get excited and jump straight into ideating a solution to build a business around.

Avoid making something and then hoping people buy it when you could research what people need and then make that.

It doesn’t make any sense to make a key and then run around looking for a lock to open.

There are many ingredients in the recipe for a successful startup, but whatever you read and wherever you go, one of the first pieces of advice will almost certainly be to do your homework on validation. You have to validate both your problem and your solution to be able to establish problem-solution fit and, later on, product-market fit. If you manipulate your future customers into liking your solution, or fail to uncover all the aspects and layers of the problem you identified, your idea can easily lose its footing, and with it the probability of surviving and actually being turned into a prosperous business. Let us know if we can help at this initial yet super-important stage.

Validation is the first step towards learning more about the problem you are ultimately looking to solve.

Finding your unique value proposition is only possible if you take a thorough look at your competitors. The world of tech is highly competitive, particularly when you operate in a field with low entry barriers, so you need to carefully examine and regularly follow the news and developments of the companies that operate in the same field and market. This might even lead to several pivots, because you significantly increase your chances of success if you can offer your customers a solution that is unique in at least some respect. Introducing yourself as “we are like Uber/Snapchat/WeWork/Spotify, only better” is hardly sufficient in most cases. Unless you really are that much better – but then you need to know that too, so keep up the competitive analysis.