Summary: 1. Introduction. – 2. European Approach to AI in Adjudication. – 3. AI as Judicial Assistants: Supporting Human Decision-Making. – 4. AI as Autonomous Adjudicators: The Prospect of Replacing Judges. – 5. Conclusions.
Background: An ongoing debate concerns whether artificial intelligence (AI) could, or should, replace human judges in the decision-making process. Increasing attention is being paid to the possibility that AI systems may, over time, equal or surpass human judges in efficiency, consistency, and the delivery of reasoned decisions. At the same time, current developments in legal technology point primarily toward the use of AI as a tool designed to assist judicial decision-making rather than to exercise autonomous adjudicatory authority. This tension between supportive and substitutive uses of AI calls for a nuanced analysis of the permissible and appropriate role of AI in adjudication.
The debate becomes even more complex in the European context, where the intersection of technology and law is guided by a commitment to upholding fundamental rights and ethical principles. The adoption of various soft law instruments, such as ethical guidelines and recommendations on AI, alongside the binding provisions of Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (the AI Act), underscores the EU’s proactive approach to regulating AI in high-risk and sensitive domains, including the administration of justice. This dual emphasis on ethical standards and legal safeguards makes it essential to examine the European approach to AI in adjudication.
Methods: This article employs a qualitative legal methodology, drawing primarily on doctrinal, analytical, and teleological methods. The doctrinal method serves as the foundation, involving a systematic analysis of EU and Council of Europe instruments, including the European Ethical Charter on the Use of AI in Judicial Systems, the Ethics Guidelines for Trustworthy AI, and the AI Act, to identify how European law conceptualises AI in adjudication and safeguards human oversight. The teleological method is applied to interpret these instruments in light of their broader objectives, uncovering how human-centric principles and fundamental rights guide the permissible use of AI in courts. Finally, the analytical method integrates insights from these sources to develop a conceptual framework distinguishing between supportive and substitutive models of AI adjudication, thereby clarifying the normative boundaries of the European approach.
Results and conclusions: The paper concludes that the European approach to AI in adjudication is defined by a human-centric and rights-based paradigm, developed through the combined efforts of the European Union and the Council of Europe. The results show that this framework consistently positions AI as a supportive tool that enhances judicial efficiency and consistency, while ensuring that final decision-making authority remains with human judges. At the same time, the analysis recognises that this approach, though coherent and well-suited to current technological realities, may increasingly be tested as AI systems become more advanced, challenging the assumptions of human oversight and control on which the current European model is built. While the framework firmly excludes autonomous AI judges, future developments may prompt renewed consideration of whether its existing boundaries remain adequate to govern ever more sophisticated technological involvement in adjudication.

