Identifying Model Weakness with Adversarial Examiner

Authors

  • Michelle Shu, Johns Hopkins University
  • Chenxi Liu, Johns Hopkins University
  • Weichao Qiu, Johns Hopkins University
  • Alan Yuille, Johns Hopkins University

DOI:

https://doi.org/10.1609/aaai.v34i07.6876

Abstract

Machine learning models are usually evaluated according to their average-case performance on the test set. However, this is not always ideal, because in some sensitive domains (e.g., autonomous driving) it is the worst-case performance that matters more. In this paper, we are interested in the systematic exploration of the input data space to identify the weaknesses of the model being evaluated. We propose to use an adversarial examiner in the testing stage. Unlike the existing strategy of always handing out the same (distribution of) test data, the adversarial examiner dynamically selects the next test data point based on the testing history so far, with the goal of undermining the model's performance. This sequence of test data not only helps us understand the current model, but also serves as constructive feedback for improving the model in the next iteration. We conduct experiments on ShapeNet object classification and show that our adversarial examiner can successfully put more emphasis on the weaknesses of the model, preventing performance estimates from being overly optimistic.
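A minimal sketch of the examiner loop the abstract describes, assuming a parameterized renderer (e.g., over ShapeNet viewing poses) and a simple greedy search rule; the function names, the nearest-neighbor probing heuristic, and the pose grid are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def adversarial_examiner(model, render, param_grid, n_rounds=50, seed=0):
    """Hypothetical sketch: select each next test point, given the
    history so far, to drive down the model's confidence in the true class."""
    history = []                      # (params, confidence) pairs observed so far
    candidates = list(param_grid)
    rng = np.random.default_rng(seed)
    for _ in range(min(n_rounds, len(candidates))):
        if not history:
            # No history yet: start from a random rendering parameter.
            params = candidates[rng.integers(len(candidates))]
        else:
            # Greedy heuristic (an assumption): probe the unexplored
            # candidate closest to the weakest point found so far.
            worst_params, _ = min(history, key=lambda h: h[1])
            params = min(
                candidates,
                key=lambda p: np.linalg.norm(
                    np.asarray(p, dtype=float) - np.asarray(worst_params, dtype=float)
                ),
            )
        image = render(params)        # e.g., a ShapeNet object rendered at this pose
        conf = model(image)           # confidence assigned to the ground-truth class
        history.append((params, conf))
        candidates.remove(params)
    return history                    # a worst-case-seeking test sequence

# Example use (my_model and my_renderer are placeholders):
# grid = [(az, el) for az in range(0, 360, 15) for el in range(-30, 60, 15)]
# trace = adversarial_examiner(my_model, my_renderer, grid)
```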

Published

2020-04-03

How to Cite

Shu, M., Liu, C., Qiu, W., & Yuille, A. (2020). Identifying Model Weakness with Adversarial Examiner. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 11998-12006. https://doi.org/10.1609/aaai.v34i07.6876

Section

AAAI Technical Track: Vision