1. Introduction – 2. Methodology – 3. Results – 4. Discussion – 5. Conclusions.
Background: Autonomous weapons systems (AWS) and related emerging technologies are increasingly embedded in the surveillance and decision-support architectures relevant to nuclear disarmament verification. This evolution intensifies concerns about accountability, human control and the reliability of evidentiary material generated by complex, opaque systems, including the downstream impact of such material on fair-trial guarantees, evidentiary standards and the availability of effective remedies when it is relied upon in judicial or quasi-judicial proceedings. The article asks whether existing international law, in particular nuclear disarmament treaties, international humanitarian law and the general rules on state responsibility, adequately regulates the deployment of AWS-enabled capabilities in verification, or whether specific normative adaptations are required. By focusing on verification rather than battlefield use, the study highlights an underexplored dimension of the AWS debate and shows its significance for the credibility and sustainability of nuclear disarmament arrangements.
Methods: The research relies on doctrinal and comparative legal analysis conducted by the authors, with artificial intelligence tools used solely for auxiliary tasks such as literature retrieval, material organisation and preliminary screening of state practice; all legal interpretations and normative assessments remain the independent work of the authors. The study examines the treaty regimes governing nuclear disarmament and non-proliferation, relevant soft-law instruments and the practice of international organisations involved in verification. It also compares policy documents and statements from multilateral forums concerning lethal AWS, verification technologies and the notion of meaningful human control in order to identify converging and diverging legal positions and emerging interpretive trends.
Results and Conclusions: Existing international law offers an essential but incomplete framework for regulating AWS-enabled verification. General principles of due diligence, precaution, proportionality, state responsibility and individual criminal liability apply, but they do not resolve the challenges posed by high autonomy, algorithmic opacity and the delegation of legally significant judgments to machines. Future nuclear disarmament verification regimes should therefore include explicit legal standards on meaningful human control, transparency, auditability and data governance for AI-enabled systems, together with clear rules on the attribution and review of machine-generated evidence. Developing concrete verification protocols, interpretive understandings and institutional oversight to implement these standards would enhance legal certainty, doctrinal and practical coherence, and confidence in verification, while preserving contestability, transparency and effective avenues of redress where autonomous outputs underpin allegations of non-compliance or individual responsibility.

