Revisiting Disentanglement in Downstream Tasks: A Study on Its Necessity for Abstract Visual Reasoning
DOI:
https://doi.org/10.1609/aaai.v38i13.29354
Keywords:
ML: Representation Learning
Abstract
In representation learning, a disentangled representation is highly desirable because it encodes the generative factors of data in a separable and compact pattern. Researchers have advocated leveraging disentangled representations for downstream tasks, with encouraging empirical evidence. This paper further investigates the necessity of disentangled representations in downstream applications. Specifically, we show that dimension-wise disentangled representations are unnecessary for a fundamental downstream task: abstract visual reasoning. We provide extensive empirical evidence against the necessity of disentanglement, covering multiple datasets, representation learning methods, and downstream network architectures. Furthermore, our findings suggest that the informativeness of representations is a better indicator of downstream performance than disentanglement. Finally, the positive correlation between informativeness and disentanglement explains the claimed usefulness of disentangled representations in previous works. The source code is available at https://github.com/Richard-coder-Nai/disentanglement-lib-necessity.git
Published
2024-03-24
How to Cite
Nai, R., Wen, Z., Li, J., Li, Y., & Gao, Y. (2024). Revisiting Disentanglement in Downstream Tasks: A Study on Its Necessity for Abstract Visual Reasoning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(13), 14405-14413. https://doi.org/10.1609/aaai.v38i13.29354
Section
AAAI Technical Track on Machine Learning IV