Administrative Liability for Decisions Issued by Artificial Intelligence Systems: Legal Challenges and Regulatory Prospects
Abstract
This research addresses the problem of administrative liability for decisions issued by artificial intelligence (AI) systems, highlighting the tension between the need to adopt smart technologies to enhance the efficiency of public administration and the inadequacy of traditional administrative law rules in accommodating the distinctive legal challenges these systems pose, such as algorithmic opacity (the "black box" problem) and algorithmic bias.
The research is divided into two main sections. The first reviews the legal characterization of smart administrative decisions and the fundamental distinction between assistive and autonomous systems, along with the resulting shifts in the concept of administrative will. It analyzes the key legal issues arising in this context, most notably: the difficulty of assigning fault given the multiplicity of actors involved (the administration, developers, and data providers), the violation of transparency principles and the right to defense, and the complexity of proving a causal link between system defects and the harm caused to individuals.
The second section analyzes the diverse international regulatory models for addressing this issue, focusing on two primary frameworks: the European model embodied in the AI Act, which is based on risk classification, and the legislative and regulatory landscape in the Kingdom of Saudi Arabia under Vision 2030 and the efforts of the Saudi Data and AI Authority (SDAIA).
The research concludes that achieving an effective balance requires moving beyond the traditional "service fault" theory toward risk-based liability, and it proposes a regulatory framework that combines effective human oversight (human-in-the-loop) with "liability by design" mechanisms. The study further concludes that Saudi administrative legislation must be updated to adopt an integrated regulatory model that ensures algorithmic transparency and provides fair remedies for affected individuals, thereby preventing smart systems from becoming tools for arbitrary decision-making beyond judicial review.
This work is licensed under a Creative Commons Attribution 4.0 International License.