Learning to Unlearn, Failing to Forget? Assessing Machine Unlearning Through Ethics and Epistemology
DOI:
https://doi.org/10.1609/aies.v8i1.36542
Abstract
Machine Unlearning (MU) aims to remove the influence of unwanted data from trained AI models, driven by ethical and legal concerns such as privacy (e.g., the Right to be Forgotten), bias mitigation, security, and copyright protection. This paper critically examines MU, arguing that it is currently unclear whether its technical methods and ethical goals are suitably aligned. Important questions about what MU does, what it should do, and how its efforts align with stakeholder needs remain unaddressed. Drawing on insights from social epistemology and the ethics of forgetting, the paper makes progress in clarifying what MU is and whether it aligns with the relevant goals. It does so by distinguishing three different senses of unlearning that vary with respect to which stakeholder needs they can cater to. Building on cases involving copyright and data privacy, the paper highlights potential alignment gaps between MU's methods and its wider goals, and emphasizes the need for more concrete guidelines to assess MU's effectiveness, clearer ethical foundations, and improved stakeholder engagement.
Published
2025-10-15
How to Cite
Aslam, I., Khosrowi, D., & Nagshi, R. (2025). Learning to Unlearn, Failing to Forget? Assessing Machine Unlearning Through Ethics and Epistemology. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 8(1), 204-216. https://doi.org/10.1609/aies.v8i1.36542