Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders
DOI:
https://doi.org/10.1609/aies.v7i1.31669
Abstract
The responsible AI (RAI) community has introduced numerous processes and artifacts---such as Model Cards, Transparency Notes, and Data Cards---to facilitate transparency and support the governance of AI systems. While originally designed to scaffold and document AI development processes in technology companies, these artifacts are becoming central components of regulatory compliance under recent regulations such as the EU AI Act. Much of the existing literature has focused primarily on the design of new RAI artifacts, or an examination of their use by practitioners within technology companies. However, as RAI artifacts begin to play key roles in enabling external oversight, it becomes critical to understand how stakeholders---particularly stakeholders situated outside of technology companies who govern and audit industry AI deployments---perceive the efficacy of RAI artifacts. In this study, we conduct semi-structured interviews and design activities with 19 government, legal, and civil society stakeholders who inform policy and advocacy around responsible AI efforts. While participants believe that RAI artifacts are a valuable contribution to the RAI ecosystem, many have concerns around their potential unintended and longer-term impacts on actors outside of technology companies (e.g., downstream end-users, policymakers, civil society stakeholders). We organized these beliefs into four barriers that help explain how RAI artifacts may (inadvertently) reconfigure power relations across civil society, government, and industry, impeding civil society and legal stakeholders' ability to protect downstream end-users from potential AI harms. Participants envision how structural changes, along with changes in how RAI artifacts are designed, used, and governed, could help re-direct the role and impacts of artifacts in the RAI ecosystem. Drawing on these findings, we discuss research and policy implications for RAI artifacts.
Published
2024-10-16
How to Cite
Kawakami, A., Wilkinson, D., & Chouldechova, A. (2024). Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7(1), 670-682. https://doi.org/10.1609/aies.v7i1.31669
Section
Full Archival Papers