Would you rather be sentenced by a human or a machine?
Artificial Intelligence (AI) is everywhere and in the future its impact on human life will only increase. What role can AI have in sentencing criminal cases? Can AI replace human judges in meting out punishment?
Human judges have their flaws, for they are only human. In the field of criminal justice, research has shown that sentencing outcomes are often unprincipled and inconsistent. Sentencing disparity exists between jurisdictions (one court punishes more severely than another), between judges (judge A punishes more severely than judge B) and within judges (a judge punishes black men more severely than white women, or becomes more punitive when he is hungry).
Some believe computers can help address the issue of unwarranted sentencing disparity. Computers are never cranky, tired or hungry. If they are programmed well, they can undoubtedly produce very consistent sentences. But are these very consistent sentences ‘good’, principled sentences? And what about self-learning AI, which does not follow if/then rules, but ‘thinks’ – or rather, processes information – on its own? Would such a robot judge be better at sentencing than human judges?
Let’s try to assess this by using four different criteria that might be useful when thinking about just sentencing.
Insight into how the judge determined the punishment is important for the legitimacy of sentencing. Simplistic ‘if/then’ algorithms are very transparent: it’s easy to see which factors were taken into account and how they contributed to the sentencing outcome. However, more sophisticated, extensive coding makes it difficult to see the forest for the trees. And with self-learning AI, the algorithms might become so complex that they are no longer understandable to humans. There would be no transparency in how the sentencing decision was reached; we would have to trust the black box of the robot judge. Advances are being made in AI technology in this respect, but, for now, this assessment seems accurate.
As stated above, robot judges could be really good at achieving consistency in sentencing: cases with similar characteristics would receive similar punishment. Robot judges, after all, are not prone to cognitive biases or subjective assessments like human judges. But there’s a pitfall: robot judges are trained on data from prior cases. These cases were tried by human judges and are thus infected with those judges’ biases. Robot judges replicate these biases: biases present in the verdicts of human judges become embedded in the algorithm, reinforcing existing inequality and stereotypes.
Human judges are no better
So, robot judges will sentence consistently, but in a way that is opaque and reinforces existing unwarranted sentencing disparity. However, human judges are no better: their sentences suffer from both inconsistency and a lack of transparency in their reasoning. Of course, their verdicts include their reasoning for the punishment, but these explanations only refer to circumstances that the judges took into account consciously. The factors that may affect the sentencing decision unconsciously remain unknown. The judge’s brain is, in fact, also a black box.
Yet, despite the flaws of human judges, many people prefer being sentenced by a human judge to being sentenced by a robot judge. Algorithm aversion plays a role here, and it is even greater when symbolic values and beliefs are at stake. Since sentencing is all about moral decision-making, people are reluctant to put their faith in the mechanical hands of the robot judge: current AI is incapable of making moral decisions because it doesn’t understand morality. People therefore do not perceive robot judges as equal decision-makers.
In addition, sentencing is not only about the judge making moral decisions, but also about a moral message being communicated to the offender – and to the victim and society as well. Human judges are able to communicate effectively with participants and the public, and engage with them as one moral agent to another, for example by showing empathy towards the offender or the victim. While some human judges are better at this than others, AI judges may never achieve it at all.
So if we leave sentencing to AI, are we not losing the very essence of sentencing?
If you would like to read more about this, read our chapter ‘Artificial Intelligence: Humans against machines’ in the book ‘Sentencing and Artificial Intelligence’, edited by Jesper Ryberg and Julian Roberts.
Important issue, no doubt.
But it is much more complicated than that, for many reasons. Just two points for now:
First, the right to appeal is fundamental in every judiciary and state. How would one appeal? Would one create a robot more advanced and sophisticated than, say, the lower tribunal? Or would there be no appeal at all?
Second, judges enjoy immunity, in full or in part (depending on the system or state). Would such a robot be a legal person? Would it bear legal personality? Otherwise, if a judge acts or has acted maliciously, he should face a lawsuit or trial. But can a robot become a legal person? Or would it face no accountability whatsoever?
P.S.: There is no such bias as the respectable author of the post presents here; it depends on clear guiding legislation. In any case, such bias is meant to be eliminated through appeal. Which brings us back to the question: how to appeal while robots prevail?