Summary: 1. Introduction. – 2. Methodology. – 3. Results and Discussion. – 3.1. Ethical and Copyright Challenges. – 3.1.1. Problems of Legal Regulation of Artificial Intelligence. – 3.1.2. Problems of Normative and Technical Regulation of Artificial Intelligence. – 3.1.3. Problems of Ethical Regulation of Artificial Intelligence. – 3.2. Institutional and Procedural Safeguards. – 3.3. Liability and Protection of Citizens’ Rights. – 4. Conclusions.
Background: The active integration of artificial intelligence (AI) into diverse spheres of human activity has created significant opportunities for innovation and efficiency, while simultaneously raising complex ethical, legal, and social challenges. Among these challenges, high-risk AI systems warrant particular scrutiny due to their potential impact on fundamental rights, public safety, and socio-economic relations. This research examines both the benefits and risks of AI technologies, with an emphasis on the need to establish clear legal and regulatory frameworks at the national and international levels.
Methods: The study employs a comparative legal analysis of existing regulatory frameworks, including the European Union’s Artificial Intelligence Act (EU AI Act), the OECD AI Principles, and national legislative practices. The methodology is based on a systematic review of normative legal acts, doctrinal sources, and policy papers, as well as an evaluation of prospective risks associated with the use of high-risk AI systems in various sectors, including transport, healthcare, and financial services.
Results and conclusions: The analysis reveals that, while the adoption of AI contributes to economic development, efficiency in public administration, and improved quality of services, it also generates risks such as discrimination, violations of privacy, cybersecurity threats, and reduced accountability. In particular, the study highlights that existing legislation in Kazakhstan, as in many other jurisdictions, does not sufficiently address the specific characteristics of high-risk AI systems. Comparative legal analysis demonstrates that the most effective regulatory models are risk-based, combining transparency requirements, human oversight, and liability mechanisms. The findings suggest that targeted amendments to existing legislation, such as in the areas of mandatory insurance and consumer protection, could serve as an interim measure, while the adoption of a dedicated AI law may be necessary in the long term.
The study underscores the need for a balanced legal framework that harmonises technological innovation with the protection of human rights and societal interests. It is argued that Kazakhstan, drawing on international best practices, should pursue a two-stage approach: (1) introducing targeted amendments to sectoral legislation; and (2) developing a comprehensive AI law focused on high-risk systems. Such a framework would mitigate risks, ensure accountability, and foster public trust, while promoting the responsible and sustainable use of artificial intelligence.

