Invariant Representations through Adversarial Forgetting
We propose a novel approach to achieving invariance in deep neural networks: inducing amnesia to unwanted factors of the data through a new adversarial forgetting mechanism. We show that the forgetting mechanism serves as an information bottleneck, which adversarial training manipulates to learn invariance to unwanted factors. Empirical results show that the proposed framework achieves state-of-the-art performance at learning invariance in both nuisance and bias settings on a diverse collection of datasets and tasks.
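The core idea can be illustrated with a toy sketch: a representation is passed through an element-wise forget-gate (a mask in [0, 1]) that zeroes out dimensions carrying the unwanted factor. This is not the paper's implementation; the mask here is hand-set and the data layout (task signal in the first two dimensions, nuisance in the last two) is a hypothetical assumption, whereas in the paper the mask is learned under pressure from an adversarial nuisance discriminator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy representation z: task signal in dims 0-1, nuisance factor
# in dims 2-3 (hypothetical layout chosen for illustration).
n = 1000
task = rng.normal(size=(n, 2))
nuisance = rng.normal(size=(n, 2))
z = np.concatenate([task, nuisance], axis=1)

# Forget-gate: an element-wise mask m in [0, 1] applied to z.
# Hand-set here; in the paper it is learned so that an adversary
# cannot recover the unwanted factor from the masked representation.
mask = np.array([1.0, 1.0, 0.0, 0.0])
z_forgotten = z * mask

# The masked representation retains the task dimensions unchanged,
# while the nuisance dimensions are driven to zero variance, i.e.
# the representation carries no information about the nuisance.
retained = np.allclose(z_forgotten[:, :2], task)
nuisance_var = z_forgotten[:, 2:].var()
print(retained, nuisance_var)
```

The mask acts as a bottleneck: dimensions multiplied by values near 0 transmit little information downstream, so adversarial training can selectively "forget" the unwanted factor while keeping task-relevant content.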
How to Cite
Jaiswal, A., Moyer, D., Ver Steeg, G., AbdAlmageed, W., & Natarajan, P. (2020). Invariant Representations through Adversarial Forgetting. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 4272-4279. https://doi.org/10.1609/aaai.v34i04.5850
AAAI Technical Track: Machine Learning