Towards Decision-Friendly AUC: Learning Multi-Classifier with AUCµ

Authors

  • Peifeng Gao, School of Computer Science and Tech., University of Chinese Academy of Sciences
  • Qianqian Xu, Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences
  • Peisong Wen, School of Computer Science and Tech., University of Chinese Academy of Sciences; Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences
  • Huiyang Shao, School of Computer Science and Tech., University of Chinese Academy of Sciences; Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences
  • Yuan He, Alibaba Group
  • Qingming Huang, School of Computer Science and Tech., University of Chinese Academy of Sciences; Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences; BDKM, University of Chinese Academy of Sciences; Peng Cheng Laboratory

DOI:

https://doi.org/10.1609/aaai.v37i6.25926

Keywords:

ML: Multi-Class/Multi-Label Learning & Extreme Classification, ML: Classification and Regression

Abstract

Area Under the ROC Curve (AUC) is a widely used ranking metric in imbalanced learning due to its insensitivity to label distributions. As a well-known multiclass extension of AUC, Multiclass AUC (MAUC, a.k.a. the M-metric) measures the average AUC of multiple binary classifiers. In this paper, we argue that simply optimizing MAUC is far from enough for imbalanced multi-classification. More precisely, MAUC only focuses on learning scoring functions via ranking optimization, while leaving the decision process unconsidered. As a result, scoring functions that make good decisions might nevertheless achieve low performance in terms of MAUC. To overcome this issue, we turn to AUCµ, another multiclass variant of AUC, which further takes the decision process into consideration. Motivated by this, we propose a surrogate risk optimization framework to improve model performance from the perspective of AUCµ. Practically, we propose a two-stage training framework for multi-classification: in the first stage, a scoring function is learned by maximizing AUCµ; in the second stage, we seek a decision function that improves the F1-metric via our proposed soft F1. Theoretically, we first provide sufficient conditions under which optimizing the surrogate losses leads to the Bayes optimal scoring function. Afterward, we show that the proposed surrogate risk enjoys a generalization bound of order O(1/√N). Experimental results on four benchmark datasets demonstrate the effectiveness of our proposed method in terms of both AUCµ and the F1-metric.
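For intuition, the two training stages described in the abstract can be sketched in code. The snippet below is a minimal PyTorch illustration under one plausible reading of the abstract, not the authors' released implementation: pairwise_aucmu_loss approximates the pairwise AUCµ statistic with a squared-hinge surrogate on decision-aligned score differences s_i(x) - s_j(x), and soft_f1_loss is one common differentiable relaxation of the macro F1-metric based on soft confusion-matrix counts. The function names, the margin, and the macro averaging are assumptions made for illustration.

    import torch

    def pairwise_aucmu_loss(scores, labels, num_classes, margin=1.0):
        # Hypothetical squared-hinge surrogate for the pairwise AUCµ statistic.
        # For each class pair (i, j), AUCµ compares the decision-aligned score
        # difference d(x) = s_i(x) - s_j(x) of class-i samples against that of
        # class-j samples; we penalize sample pairs whose gap falls below `margin`.
        loss, num_pairs = scores.new_zeros(()), 0
        for i in range(num_classes):
            for j in range(i + 1, num_classes):
                xi = scores[labels == i]          # scores of class-i samples
                xj = scores[labels == j]          # scores of class-j samples
                if len(xi) == 0 or len(xj) == 0:
                    continue
                di = xi[:, i] - xi[:, j]          # difference on class-i samples
                dj = xj[:, i] - xj[:, j]          # difference on class-j samples
                gap = di.unsqueeze(1) - dj.unsqueeze(0)   # all cross-class sample pairs
                loss = loss + torch.clamp(margin - gap, min=0.0).pow(2).mean()
                num_pairs += 1
        return loss / max(num_pairs, 1)

    def soft_f1_loss(scores, labels, num_classes):
        # Hypothetical differentiable relaxation of macro F1: hard predictions
        # are replaced by softmax probabilities, so TP/FP/FN become soft counts.
        probs = torch.softmax(scores, dim=1)
        onehot = torch.nn.functional.one_hot(labels, num_classes).float()
        tp = (probs * onehot).sum(dim=0)
        fp = (probs * (1.0 - onehot)).sum(dim=0)
        fn = ((1.0 - probs) * onehot).sum(dim=0)
        f1 = 2.0 * tp / (2.0 * tp + fp + fn + 1e-8)
        return 1.0 - f1.mean()                    # minimize 1 - soft macro-F1

Under these assumed names, stage one would train the scoring network with pairwise_aucmu_loss, and stage two would freeze it and tune a lightweight decision component (e.g., per-class thresholds or a small linear head) with soft_f1_loss.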

Published

2023-06-26

How to Cite

Gao, P., Xu, Q., Wen, P., Shao, H., He, Y., & Huang, Q. (2023). Towards Decision-Friendly AUC: Learning Multi-Classifier with AUCµ. Proceedings of the AAAI Conference on Artificial Intelligence, 37(6), 7633-7641. https://doi.org/10.1609/aaai.v37i6.25926

Issue

Vol. 37 No. 6 (2023)

Section

AAAI Technical Track on Machine Learning I