Decoding accountability: the importance of explainability in liability frameworks for smart border systems
Article
Nuawuchi, U and George, C. 2025. Decoding accountability: the importance of explainability in liability frameworks for smart border systems. Discover Computing. 28. https://doi.org/10.1007/s10791-025-09559-5
Type | Article |
---|---|
Title | Decoding accountability: the importance of explainability in liability frameworks for smart border systems |
Authors | Nuawuchi, U and George, C. |
Abstract | This paper examines the challenges posed by Automated Decision-Making systems (ADMs) in border control, focusing on the limitations of the proposed AI Liability Directive (AILD), now withdrawn, in addressing potential harms. We identify key issues within the AILD, including the plausibility requirement, the knowledge paradox, and the exclusion of the human-in-the-loop, which create significant barriers for claimants seeking redress. Although the AILD has been withdrawn, the European Commission is contemplating a new proposal for an AI liability regime; if that proposal resembles the AILD, the substantial shortcomings identified here will need to be addressed. To address these shortcomings, we propose integrating sui generis explainability requirements into the AILD framework, or mandating compliance with Article 86 of the Artificial Intelligence Act (AIA), notwithstanding its ineffectiveness. This approach aims to bridge knowledge and liability gaps, empower claimants, and enhance transparency in AI decision-making processes. Our recommendations include expanding the disclosure requirements to incorporate a sui generis explainability requirement, implementing a tiered plausibility standard, and introducing regulatory sandboxes. These measures seek to engender accountability and fairness. With the refinement of the AILD in mind, these considerations aim to inform any future proposal for an AI liability regime and to foster a regulatory environment that encourages the responsible and accountable development and use of AI technologies, ensuring that AI-driven or smart border control systems enhance security and efficiency while upholding fundamental rights and human dignity. |
Keywords | Artificial Intelligence |
Sustainable Development Goals | 9: Industry, Innovation and Infrastructure |
Middlesex University Theme | Sustainability |
Research Group | Aspects of Law and Ethics Related to Technology group |
Publisher | Springer |
Journal | Discover Computing |
ISSN (electronic) | 2948-2992 |
Online publication date | 04 May 2025 |
Submitted | 30 Nov 2024 |
Accepted | 15 Apr 2025 |
Deposited | 07 May 2025 |
Output status | Published |
Publisher's version | File access level: Open |
Copyright Statement | This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. |
Digital Object Identifier (DOI) | https://doi.org/10.1007/s10791-025-09559-5 |
Language | English |
https://repository.mdx.ac.uk/item/2414y6