Summary: 1. Introduction. – 2. Methodology. – 3. Results. – 3.1. AI Applications in Anti-Corruption. – 3.2. How AI Systems Perpetuate Bias and Discrimination. – 3.3. Current State of Liability. – 3.4. Responsibility Matrix. – 4. Discussions. – 5. Conclusions.
Background: The question of whether machines can be corrupt appears paradoxical; nevertheless, it is rapidly gaining relevance as artificial intelligence (AI) changes how decisions are made in public and government systems. These systems offer notable advantages, including enhanced efficiency, reduced human error, and the ability to combat corruption by detecting fraud, tracking funds, and improving public services. AI can make decisions based on data rather than personal interests. However, the use of AI is not without risks. When trained on biased datasets, AI systems may produce unfair outcomes. Additionally, if AI systems are deliberately manipulated for personal or political gain, they may support or conceal corrupt actions. This research examines the role of AI in public services, exploring its potential to prevent or contribute to corruption. The goal is to understand where AI is safe and where it is risky.
Methods: The research used a qualitative research design. Data was collected by reviewing academic papers, laws, and official reports. Sources were identified using academic databases such as Google Scholar, with a focus on peer-reviewed law journals, policy briefs, and official government documents. All materials were checked using the CRAAP test. The method for analysing the data was doctrinal legal analysis.
Results and Conclusions: The findings indicate that AI has considerable potential to enhance transparency and reduce bribery by limiting human discretion in administrative processes. However, in countries with weak legal systems, AI can be misused. When AI systems lack transparency or explainability, they can obscure corrupt practices rather than expose them. This risk is especially pronounced in high-stakes domains such as public procurement and budgeting.
While certain countries have implemented robust legal safeguards and effective audits that mitigate risks, many others lack clear rules on who is responsible when AI contributes to corruption. In numerous cases, public AI systems lack external checks, and existing mechanisms for reporting corruption are not equipped to address AI-specific issues. As a result, accountability gaps persist.
The study highlights the continued importance of human oversight in preventing manipulation. It recommends that governments strengthen regulatory frameworks by introducing explicit provisions on accountability. Independent audits should be required for all public AI systems, and whistleblower mechanisms should be updated to accommodate AI-related cases.