How Robust are Model Rankings: A Leaderboard Customization Approach for Equitable Evaluation

Authors

  • Swaroop Mishra Arizona State University
  • Anjana Arunkumar Arizona State University

DOI:

https://doi.org/10.1609/aaai.v35i15.17599

Keywords:

Ethics -- Bias, Fairness, Transparency & Privacy, Adversarial Attacks & Robustness, Interpretability & Analysis of NLP Models, General

Abstract

Models that top leaderboards often perform unsatisfactorily when deployed in real-world applications; this has necessitated rigorous and expensive pre-deployment model testing. A hitherto unexplored facet of model performance is: Are our leaderboards doing equitable evaluation? In this paper, we introduce a task-agnostic method to probe leaderboards by weighting samples based on their 'difficulty' level. We find that leaderboards can be adversarially attacked and top-performing models may not always be the best models. We subsequently propose alternate evaluation metrics. Our experiments on 10 models show changes in model ranking and an overall reduction in previously reported performance, thus rectifying the overestimation of AI systems' capabilities. Inspired by behavioral testing principles, we further develop a prototype of a visual analytics tool that enables leaderboard revamping through customization, based on an end user's focus area. This helps users analyze models' strengths and weaknesses, and guides them in the selection of a model best suited for their application scenario. In a user study, members of various commercial product development teams, covering 5 focus areas, find that our prototype reduces pre-deployment development and testing effort by 41% on average.
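The abstract does not spell out how the difficulty weighting enters the metric. As a minimal sketch of the idea only, assuming hypothetical per-sample difficulty scores in (0, 1] (the paper's actual weighting scheme and metrics may differ), a difficulty-weighted accuracy that re-ranks models might look like:

```python
import numpy as np

def weighted_accuracy(correct, difficulty):
    """Difficulty-weighted accuracy: harder samples contribute more.

    correct    : per-sample 0/1 outcomes for one model
    difficulty : per-sample difficulty weights in (0, 1]
    """
    correct = np.asarray(correct, dtype=float)
    difficulty = np.asarray(difficulty, dtype=float)
    return float((correct * difficulty).sum() / difficulty.sum())

# Hypothetical per-sample correctness for two models on four samples.
models = {
    "model_a": [1, 1, 0, 1],
    "model_b": [1, 0, 1, 1],
}
# Hypothetical difficulty scores; under plain accuracy the models tie.
difficulty = [0.2, 0.9, 0.8, 0.5]

# Rank by the weighted metric instead of unweighted accuracy.
ranking = sorted(models,
                 key=lambda m: weighted_accuracy(models[m], difficulty),
                 reverse=True)
print(ranking)  # model_b overtakes model_a: it gets the harder samples right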


Published

2021-05-18

How to Cite

Mishra, S., & Arunkumar, A. (2021). How Robust are Model Rankings: A Leaderboard Customization Approach for Equitable Evaluation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(15), 13561-13569. https://doi.org/10.1609/aaai.v35i15.17599

Section

AAAI Technical Track on Speech and Natural Language Processing II