Legal Challenges Resulting from the Use of Artificial Intelligence in the Field of Cybersecurity According to Saudi Domestic Regulations and International Regulations
Abstract
This study examines the legal risks associated with the use of Artificial Intelligence (AI) in cybersecurity through a comparative analysis of the legal framework of the Kingdom of Saudi Arabia and relevant international instruments. As AI technologies increasingly underpin sophisticated cross-border cyberattacks, understanding their legal implications has become both urgent and essential.
Employing a descriptive-analytical methodology, the research reviews and analyses key Saudi legal instruments — including the Personal Data Protection Law, the Anti-Cybercrime Law, and the National Cybersecurity Policy — and compares them with major international frameworks such as the General Data Protection Regulation (GDPR), the Budapest Convention on Cybercrime, and the ENISA Guidelines.
The findings highlight notable convergence in areas related to data protection and the criminalization of cyberattacks, yet reveal significant divergence regarding the attribution of legal liability and the mechanisms for international cooperation. The study further identifies critical legal gaps, particularly the absence of explicit provisions governing liability for AI systems and the lack of harmonized standards for managing cross-border cybersecurity risks.
Accordingly, the study recommends the continuous development of both domestic and international legal frameworks to keep pace with AI’s rapid evolution, alongside the establishment of unified regulatory standards that strengthen cybersecurity governance and ensure effective data protection against emerging technological threats.
This work is licensed under a Creative Commons Attribution 4.0 International License.