Examining CNN Representations With Respect to Dataset Bias

Authors

  • Quanshi Zhang, University of California, Los Angeles
  • Wenguan Wang, Beijing Institute of Technology
  • Song-Chun Zhu, University of California, Los Angeles

DOI:

https://doi.org/10.1609/aaai.v32i1.11833

Keywords:

Deep learning, Interpretable model, Knowledge representation

Abstract

Given a pre-trained CNN and no testing samples, this paper proposes a simple yet effective method to diagnose the CNN's feature representations. We aim to discover representation flaws caused by potential dataset bias. More specifically, when the CNN is trained to estimate image attributes, we mine latent relationships between the representations of different attributes inside the CNN. We then compare the mined attribute relationships with ground-truth attribute relationships to discover the CNN's blind spots and failure modes caused by dataset bias. Notably, such representation flaws cannot be detected by conventional evaluation strategies based on testing images, because the testing images may exhibit a similar bias. Experiments demonstrate the effectiveness of our method.
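To make the comparison step in the abstract concrete, the sketch below illustrates one plausible reading under stated assumptions: pairwise attribute relationships are mined as cosine similarities between hypothetical per-attribute representation vectors extracted from the CNN, and attribute pairs whose mined relationship deviates strongly from the ground-truth relationship are flagged as candidate blind spots. All names here (mine_relationships, flag_suspect_pairs, the 0.5 threshold) are illustrative assumptions, not the authors' implementation.

    # Minimal, hypothetical sketch of the diagnosis idea: compare mined
    # attribute relationships inside a CNN against ground-truth ones.
    import numpy as np

    def mine_relationships(attribute_vectors: np.ndarray) -> np.ndarray:
        """Cosine similarity between per-attribute representation vectors.

        attribute_vectors: (n_attributes, feature_dim) array, e.g. one
        representation vector per attribute output of the CNN (assumed
        to be extracted beforehand; the extraction step is not shown).
        """
        norms = np.linalg.norm(attribute_vectors, axis=1, keepdims=True)
        unit = attribute_vectors / np.clip(norms, 1e-12, None)
        return unit @ unit.T  # (n_attributes, n_attributes)

    def flag_suspect_pairs(mined: np.ndarray,
                           gt_relation: np.ndarray,
                           threshold: float = 0.5):
        """Return attribute pairs whose mined relationship disagrees with
        the ground-truth relationship by more than `threshold` -- these
        are candidate bias-induced blind spots / failure modes."""
        n = mined.shape[0]
        suspects = []
        for i in range(n):
            for j in range(i + 1, n):
                gap = abs(mined[i, j] - gt_relation[i, j])
                if gap > threshold:
                    suspects.append((i, j, gap))
        return sorted(suspects, key=lambda t: -t[2])

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        vecs = rng.normal(size=(5, 16))   # toy per-attribute representations
        gt = np.eye(5)                    # toy ground-truth relationships
        gt[0, 1] = gt[1, 0] = 1.0         # suppose attributes 0 and 1 co-occur
        for i, j, gap in flag_suspect_pairs(mine_relationships(vecs), gt):
            print(f"attributes {i} and {j}: relationship gap {gap:.2f}")

In this toy run the near-orthogonal random representations of attributes 0 and 1 conflict with their assumed ground-truth relationship, so the pair is flagged; no testing images are involved, matching the paper's setting.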

Published

2018-04-29

How to Cite

Zhang, Q., Wang, W., & Zhu, S.-C. (2018). Examining CNN Representations With Respect to Dataset Bias. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11833