LEGAL AND REGULATORY CHALLENGES IN AI-DRIVEN DECISION-MAKING SYSTEMS: IMPLICATIONS FOR ACCOUNTABILITY AND GOVERNANCE

NIHAD RAMMACH, HASNA SLAMTI

Abstract

The rapid integration of artificial intelligence (AI) into decision-making processes across sectors such as finance, healthcare, governance, and criminal justice has introduced complex legal and regulatory challenges that demand urgent scholarly and institutional attention. AI-driven systems, often characterized by opacity, autonomy, and data dependence, complicate traditional notions of accountability, liability, and oversight. This study critically examines the legal ambiguities surrounding responsibility attribution when AI systems produce harmful or biased outcomes, particularly in contexts where human intervention is minimal or indirect. It explores the limitations of existing legal frameworks, which were designed primarily for human actors, in addressing issues such as algorithmic bias, data protection, transparency, and due process. The paper further analyzes emerging regulatory approaches, including risk-based frameworks, ethical AI guidelines, and international governance efforts, highlighting both their strengths and their gaps. Special attention is given to the tension between innovation and regulation: overly restrictive policies may hinder technological advancement, while insufficient oversight may expose individuals and institutions to significant risks. The study also evaluates the role of explainability and auditability in enhancing trust and compliance, emphasizing the need for interdisciplinary collaboration among legal scholars, technologists, and policymakers. Ultimately, the research underscores the necessity of developing adaptive, forward-looking regulatory models that can accommodate the dynamic nature of AI technologies. It proposes a governance paradigm that integrates legal accountability mechanisms with ethical standards and technical safeguards to ensure responsible AI deployment. The findings contribute to ongoing debates on how best to align AI innovation with fundamental legal principles, including fairness, transparency, and justice, thereby promoting sustainable and trustworthy AI ecosystems.
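
The abstract names auditability and explainability as technical safeguards but, being an abstract, does not describe any implementation. Purely as an illustrative sketch, and not as a method proposed by the authors, the following Python fragment shows one minimal form a decision audit trail could take; the record fields, the AuditedDecision and append_to_audit_log names, and the credit-scoring example are hypothetical and use only the standard library.

# Illustrative sketch only: not part of the paper. Each automated decision is
# stored with its inputs, model version, outcome, and a plain-language reason,
# so a later reviewer can trace how an individual outcome was produced.
import json
import hashlib
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditedDecision:
    subject_id: str      # pseudonymous identifier of the affected person
    model_version: str   # which model produced the outcome
    inputs: dict         # features the model actually used
    outcome: str         # the decision, e.g. "loan_denied"
    reason: str          # human-readable explanation for the outcome
    timestamp: str       # when the decision was made (UTC, ISO 8601)

def append_to_audit_log(decision: AuditedDecision, path: str = "audit_log.jsonl") -> str:
    """Append the decision as one JSON line and return a content hash that
    can later be used to check that the record was not altered."""
    line = json.dumps(asdict(decision), sort_keys=True)
    digest = hashlib.sha256(line.encode("utf-8")).hexdigest()
    with open(path, "a", encoding="utf-8") as f:
        f.write(line + "\n")
    return digest

if __name__ == "__main__":
    decision = AuditedDecision(
        subject_id="applicant-4821",
        model_version="credit-risk-v3.2",
        inputs={"income": 41000, "debt_ratio": 0.38},
        outcome="loan_denied",
        reason="debt-to-income ratio above policy threshold",
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    print(append_to_audit_log(decision))

An append-only, hashed record is only one possible design; the point of the sketch is that an external reviewer can later reconstruct what the system decided, on what inputs, and for what stated reason.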
