An Information Theoretic Evaluation Metric for Strong Unlearning

Authors

  • Dongjae Jeon, Yonsei University
  • Wonje Jeung, Yonsei University
  • Taeheon Kim, Seoul National University
  • Albert No, Yonsei University
  • Jonghyun Choi, Seoul National University

DOI:

https://doi.org/10.1609/aaai.v40i26.39373

Abstract

Machine unlearning (MU) aims to remove the influence of specific data from trained models, addressing privacy concerns and ensuring compliance with regulations such as the "right to be forgotten." Evaluating strong unlearning, where the unlearned model is indistinguishable from one retrained without the forget data, remains a significant challenge in deep neural networks (DNNs). Common black-box metrics, such as variants of membership inference attacks and accuracy comparisons, primarily assess model outputs but often fail to capture residual information in intermediate layers. To bridge this gap, we introduce the Information Difference Index (IDI), a novel white-box metric inspired by information theory. IDI quantifies retained information in intermediate features by measuring mutual information between those features and the labels to be forgotten, offering a more comprehensive assessment of unlearning efficacy. Our experiments demonstrate that IDI effectively measures the degree of unlearning across various datasets and architectures, providing a reliable tool for evaluating strong unlearning in DNNs.
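The paper's exact IDI definition is not reproduced on this page, but the quantity it builds on, mutual information between a model's intermediate features and the forget labels, can be illustrated generically. The sketch below is a minimal plug-in (histogram-based) MI estimator for a scalar feature; the function name, bin count, and synthetic data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mutual_information(z, y, n_bins=8):
    """Plug-in estimate (in bits) of I(Z; Y) between a scalar feature z
    and discrete labels y, using an equal-width histogram of z."""
    edges = np.histogram_bin_edges(z, bins=n_bins)
    zb = np.digitize(z, edges[1:-1])              # bin index in 0..n_bins-1
    labels = np.unique(y)
    joint = np.zeros((n_bins, labels.size))       # counts of (z-bin, label)
    for j, lab in enumerate(labels):
        joint[:, j] = np.bincount(zb[y == lab], minlength=n_bins)
    joint /= joint.sum()                          # empirical joint p(z, y)
    pz = joint.sum(axis=1, keepdims=True)         # marginal p(z)
    py = joint.sum(axis=0, keepdims=True)         # marginal p(y)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (pz @ py)[nz])).sum())

# A feature that still encodes the forget labels carries high MI with them;
# a feature scrubbed of that information carries MI near zero (synthetic data).
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 2000)                      # stand-in forget-class labels
retained = y + 0.1 * rng.standard_normal(2000)    # label-revealing feature
scrubbed = rng.standard_normal(2000)              # label-independent feature
print(mutual_information(retained, y), mutual_information(scrubbed, y))
```

In the IDI setting, such an estimate would be taken over intermediate-layer features of the unlearned model and compared against a model retrained from scratch; a large gap indicates residual information that black-box output metrics can miss.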

Published

2026-03-14

How to Cite

Jeon, D., Jeung, W., Kim, T., No, A., & Choi, J. (2026). An Information Theoretic Evaluation Metric for Strong Unlearning. Proceedings of the AAAI Conference on Artificial Intelligence, 40(26), 22173–22181. https://doi.org/10.1609/aaai.v40i26.39373

Section

AAAI Technical Track on Machine Learning III