Summary: 1. Introduction. – 2. Methodology. – 3. Legal and Political Foundations in Europe. – 4. Germany: Use Cases, Regulation, and Development Pathways. – 5. Ukraine: Use Cases, Regulation, and Development Pathways. – 6. Safeguarding Rights, Evidence, and Fair Trial Standards. – 7. Conclusion.
Background: Artificial intelligence (AI) is rapidly moving beyond peripheral administrative tools into applications that directly influence the functioning of criminal justice systems. In Europe, this integration proceeds under a cautious, law-centred approach that seeks to balance innovation with the preservation of judicial independence, fairness, and the rule of law. This article offers a comparative legal analysis of AI deployment in the criminal justice systems of Germany and Ukraine, situating national developments within the broader framework of the EU Artificial Intelligence Act (2024), Council of Europe standards, and constitutional safeguards. Germany’s structured, federally coordinated rollout contrasts with Ukraine’s targeted yet ethically constrained implementation, reflecting divergent institutional capacities and legal traditions.
Methods: This study adopts a comparative legal approach that combines functionalist and contextualist perspectives. The functionalist dimension examines how Germany and Ukraine employ AI in criminal justice to address analogous demands for efficiency, transparency, and rights protection, while the contextualist dimension situates these developments within each country’s constitutional framework, institutional capacity, and socio-political environment, notably the impact of wartime conditions in Ukraine. This combined perspective ensures that similarities and divergences are assessed not in the abstract but against the broader background of European and national legal cultures. The analysis draws on primary law, regulatory instruments, official court and ministerial reports, and peer-reviewed scholarship. Empirical examples include German pilot projects in predictive policing (PRECOBS, KLB-operativ), investigative filtering tools, and administrative AI in the courts, as well as Ukraine’s probation risk-assessment algorithm Cassandra and AI-assisted systems for legal research and translation. Experience in the United States with algorithmic risk assessment serves as a cautionary benchmark.
Results and Conclusions: The study finds that, while both jurisdictions restrict the use of AI as a substitute for core judicial decision-making, Germany leverages its infrastructure, coordinated administration, and legislative oversight to test and evaluate AI tools. Ukraine’s integration, by contrast, is more selective and subject to explicit ethical limitations, but it is hindered by gaps in transparency and by the constraints of wartime conditions. The analysis identifies common challenges, including algorithmic bias, explainability, evidentiary admissibility, and the protection of fair trial guarantees, and formulates context-specific recommendations: mandatory external audits, codified procedural rights to challenge AI-generated data, clearer evidentiary protocols, and enhanced judicial awareness of AI technologies. The study underscores that the sustainable integration of AI into criminal justice must remain supportive, auditable, and under human control if it is to comply with European legal standards and safeguard fundamental rights.

