Identifying Model Weakness with Adversarial Examiner


  • Michelle Shu, Johns Hopkins University
  • Chenxi Liu, Johns Hopkins University
  • Weichao Qiu, Johns Hopkins University
  • Alan Yuille, Johns Hopkins University



Machine learning models are usually evaluated by their average-case performance on the test set. However, this is not always ideal, because in some sensitive domains (e.g., autonomous driving), it is the worst-case performance that matters more. In this paper, we are interested in systematically exploring the input data space to identify the weaknesses of the model under evaluation. We propose to use an adversarial examiner in the testing stage. Unlike the existing strategy of always handing out the same (distribution of) test data, the adversarial examiner dynamically selects the next test data based on the testing history so far, with the goal of undermining the model's performance. This sequence of test data not only helps us understand the current model, but also serves as constructive feedback for improving the model in the next iteration. We conduct experiments on ShapeNet object classification and show that our adversarial examiner successfully puts more emphasis on the model's weaknesses, preventing performance estimates from being overly optimistic.
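The history-driven selection procedure described above can be sketched in a few lines. The sketch below is an illustrative assumption, not the paper's actual algorithm: it stands in a greedy batch search for the examiner's search strategy, and models "confidence" as a scalar function of the input; the names `adversarial_examine`, `model`, and `candidate_pool` are all hypothetical.

```python
import random

def adversarial_examine(model, candidate_pool, num_rounds=10, seed=0):
    """Hypothetical sketch of an adversarial examiner: at each round,
    pick the next test input so as to undermine the model, conditioned
    on the examination history so far (here, by removing already-used
    inputs and greedily choosing a low-confidence candidate)."""
    rng = random.Random(seed)
    history = []
    for _ in range(num_rounds):
        if not candidate_pool:
            break
        # Sample a small batch of untried candidates from the pool.
        batch = rng.sample(candidate_pool, k=min(5, len(candidate_pool)))
        # Greedy choice: the candidate on which the model is least confident.
        worst = min(batch, key=model)
        history.append((worst, model(worst)))
        # Condition on history: never hand out the same test input twice.
        candidate_pool = [c for c in candidate_pool if c != worst]
    return history

# Toy usage: a "model" whose confidence dips near x = 0.5, so the
# examiner should concentrate its test sequence around that weakness.
toy_model = lambda x: abs(x - 0.5)
pool = [i / 100 for i in range(100)]
transcript = adversarial_examine(toy_model, pool, num_rounds=10)
```

The selected inputs in `transcript` will average a lower confidence than the pool as a whole, which is the sense in which such an examiner emphasizes the model's weakness rather than its average-case behavior.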




How to Cite

Shu, M., Liu, C., Qiu, W., & Yuille, A. (2020). Identifying Model Weakness with Adversarial Examiner. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 11998-12006.



AAAI Technical Track: Vision