1. Introduction. – 2. Methodology. – 3. Evaluating the Kazakhstan AI Law through the Prism of the EU AI Act. – 3.1. Comparative Analysis of AI Definitions and Risk-Based Regulation. – 3.2. The Doctrine of ‘Electronic Personhood’ and Algorithmic Accountability. – 3.3. Global Trends and Institutional Adaptation. – 4. Conclusions
Background: This article addresses the critical challenges of establishing a robust legal regime for artificial intelligence (AI) in the wake of the European Union’s Artificial Intelligence Act (Regulation (EU) 2024/1689; the EU AI Act) and the Law of the Republic of Kazakhstan ‘On Artificial Intelligence’ of 2025 (the Kazakhstan AI Law). Despite these legislative efforts, scholarly debates persist over AI’s legal status, legal personality, and the protective functions of the law. This study highlights a necessary shift from reactive regulation to a preventive, risk-based model in which legal norms adapt to algorithmic behavior before conflicts emerge. The research aims to identify regulatory gaps in the Kazakhstani framework relative to European standards, specifically in the areas of fundamental rights protection and judicial accountability.
Method: The study employs a descriptive-analytical, comparative research methodology. A doctrinal legal analysis of the EU AI Act and the Kazakhstan AI Law was conducted to identify existing regulatory gaps. The formal-legal method was used to evaluate definitions of AI and its legal characteristics. Content analysis of contemporary legal scholarship (2015–2025) provided the basis for legal modeling of the ‘electronic personhood’ status. The research also applies a systems approach to categorize AI risks (minimal, medium, and high) in the public administration and law enforcement sectors.
Results and Conclusions: The analysis reveals that while the EU AI Act prohibits certain unacceptable-risk practices, such as untargeted mass biometric surveillance and predictive policing based on profiling, the Kazakhstan AI Law contains no comparable prohibitions, potentially opening the door to discriminatory law enforcement practices. The study concludes that recognizing an ‘electronic personhood’ status for AI, endowed with a hybrid legal capacity, is essential for ensuring accountability for autonomous decisions. Specific amendments to the Kazakhstan AI Law are proposed, including: 1) mandating independent expert assessment of high-risk systems; 2) prohibiting the exchange of databases containing state secrets via AI platforms; and 3) establishing algorithmic accountability for providers and operators. The research emphasizes that balancing innovation with digital accountability is the key challenge in modernizing the national digital legal order.

