Artificial intelligence has become such an important part of our daily lives that it is hard to avoid, even if we don't always recognise it. While ChatGPT and the algorithms behind social media get lots of attention, another important area where AI promises to have an impact is the law.
The idea of AI deciding guilt in legal proceedings may seem far-fetched, but it’s one we now need to give serious consideration to.
That’s because it raises questions about the compatibility of AI with conducting fair trials.
The EU has enacted legislation designed to govern how AI can and can't be used in criminal law.
In North America, algorithms designed to support fair trials are already in use.
These include Compas, the Public Safety Assessment (PSA) and the Pre-Trial Risk Assessment Instrument (PTRA).
In November 2022, the House of Lords published a report considering the use of AI technologies in the UK criminal justice system.
It would be fascinating to see how supportive algorithms could facilitate justice in the long term, for example by reducing costs in court services or by handling judicial proceedings for minor offences.
AI systems can avoid the typical fallacies of human psychology and can be subject to rigorous controls.
For some, they might even be more impartial than human judges.
Also, algorithms can generate data to help lawyers identify precedents in case law, come up with ways of streamlining judicial procedures, and support judges.
Repetitive automated decisions from algorithms could lead to a lack of creativity in the interpretation of the law, which could slow or halt development in the legal system.
AI tools designed to be used in trials must comply with a number of European legal instruments, which set out standards for the respect of human rights.
These include the European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and Their Environment, adopted by the European Commission for the Efficiency of Justice (CEPEJ) in 2018, and other legislation enacted in recent years to shape an effective framework on the use and limits of AI in criminal justice.
We also need efficient oversight mechanisms, such as human judges and review committees.
Controlling and governing AI is challenging and encompasses different fields of law, such as data protection law, consumer protection law, and competition law, as well as many other domains like labour law.
Decisions taken by machines are directly subject to the General Data Protection Regulation (GDPR), including its core requirements of fairness and accountability.
The GDPR contains provisions to prevent people from being subject to decisions based solely on automated processing, without human intervention.
There has also been discussion of this principle in other areas of law.
The issue is already with us. In the US, "risk assessment" tools have been used to assist pre-trial assessments that determine whether a defendant should be released on bail or held pending trial.
In 2017, a man from Wisconsin was sentenced to six years in prison in a judgment based in part on his Compas score.
The private company that owns Compas considers its algorithm to be a trade secret.
Neither the courts nor the defendants are therefore allowed to examine the mathematical formula used.
Towards societal changes
As the law is considered a human science, it is important that AI tools help judges and legal practitioners rather than replace them.
In modern democracies, justice follows the separation of powers.
This is the principle whereby state institutions such as the legislature, which makes the law, and the judiciary, the system of courts that apply the law, are clearly divided.
This is designed to safeguard civil liberties and guard against tyranny.
The use of AI for trial decisions could shake the balance of power between the legislature and the judiciary by challenging human laws and the decision-making process.
In this way, AI could lead to a change in our values.
And since all kinds of personal data can be used to analyse, forecast and influence human actions, the use of AI could redefine what is considered right and wrong behaviour, perhaps with no nuance.
It’s also easy to imagine how AI will become a collective intelligence.
Collective AI has already quietly appeared in the field of robotics.
Drones can communicate with each other to fly in formation.
In the future, we could imagine more and more machines communicating with each other to accomplish all kinds of tasks.
The creation of an algorithm for the impartiality of justice could signify that we consider an algorithm more capable than a human judge.
We may even be prepared to trust this tool with the fate of our own lives.
Maybe some day, we will evolve into a society similar to those depicted in science fiction novels.
A world where key decisions are delegated to new technology strikes fear into many people, perhaps because they worry that it could erase what fundamentally makes us human.
Yet, at the same time, AI is a powerful potential tool for making our daily lives easier.
In human reasoning, intelligence does not represent a state of perfection or infallible logic.
Indeed, errors play an important role in human behaviour.
They allow us to evolve towards concrete solutions that help us improve what we do.
If we wish to extend the use of AI in our daily lives, it would be wise to continue applying human reasoning to govern it.
Rahul Ram Dwivedi (RRD) is a senior journalist in 2YoDoINDIA.
NOTE : Views expressed are personal.