Summary: 1. Introduction. – 2. The Rule of Law in Times of Digital Technologies. – 3. Judicial Independence under the Influence of Digitalisation. – 4. The Algorithmisation of Decision-making. – 5. Conclusions.
The rule of law is one of the fundamental pillars, alongside human rights and democracy, affected by digitalisation today. Digital technologies used to fuel the rise of populism, manipulate opinions, attack the independence of judges, and instrumentalise the law in general contribute significantly to negative consequences for the rule of law. Particularly dangerous are the far-reaching consequences of the algorithmisation of decision-making, including judicial decision-making.
The theoretical line of this research is based on the axiological method since the rule of law, democracy, and human rights are not only the foundations of legal order, but also values recognised in many societies and supported at the individual level. The study also relied on the phenomenological method in terms of assessing the experience of being influenced by digital technologies in public and private life. The practical line of research is based on the analysis of cases of the European Court of Human Rights and the Court of Justice to illustrate the changes in jurisprudence influenced by digitalisation.
This article argues that the potential weakening of the rule of law may stem both from the impact of certain technologies as such and from their impact on particular values and foundations, which significantly aggravates that weakening.
Judicial independence is affected because judges are involved in digital interactions and are influenced by technologies in both their personal and public lives. These technologies often belong to the private sector yet are perceived as neutral and infallible, which makes court decisions highly predictable. This leads to a distortion of the essence of legal certainty and a shift of trust from the courts to certain technologies and their creators.
The possibility of algorithmic decision-making raises the question of whether its results will be fairer than, or at least as fair as, those handed down by human judges. This entails two problems: the first relates to the task of interpreting the law, and the second to the need to explain decisions. Algorithms, often perceived as reliable, are not truly capable of interpreting the law, and their ability to provide proper explanations for decisions or to understand context and social practices is questionable. Even partial reliance on algorithms should be limited, given the growing difficulty of drawing a line between the human and algorithmic roles in decision-making and of determining who should be responsible for a decision, and to what extent.