Towards Interpretation of Pairwise Learning


  • Mengdi Huai University of Virginia
  • Di Wang State University of New York at Buffalo
  • Chenglin Miao State University of New York at Buffalo
  • Aidong Zhang University of Virginia



Recently, increasing attention has been paid to an important family of learning problems called pairwise learning, in which the associated loss functions depend on pairs of instances. Despite the tremendous success of pairwise learning in many real-world applications, the lack of transparency behind the learned pairwise models makes it difficult for users to understand how particular decisions are made by these models, which further impedes users from trusting the predicted results. To tackle this problem, in this paper we study feature importance scoring as a specific approach to interpreting the predictions of black-box pairwise models. Specifically, we first propose a novel adaptive Shapley-value-based interpretation method that adaptively calculates a vector of importance scores for the features of a test instance pair while accounting for feature correlations; these scores indicate which features make key contributions to the final prediction. Because Shapley-value-based methods are usually computationally expensive, we further propose a novel robust approximate interpretation method for pairwise models, which is not only much more efficient but also robust to data noise. To the best of our knowledge, we are the first to investigate how to enable interpretation in pairwise learning. Theoretical analysis and extensive experiments demonstrate the effectiveness of the proposed methods.
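To make the Shapley-value idea behind feature importance scoring concrete, the sketch below gives a generic Monte Carlo Shapley estimator for a black-box pairwise model. This is a minimal illustration, not the paper's adaptive method: the `model(x, y)` interface, the use of a fixed `baseline` to represent absent features, and the permutation-sampling scheme are all illustrative assumptions.

```python
import random

def shapley_importance(model, pair, baseline, n_samples=200, seed=0):
    """Monte Carlo estimate of per-feature Shapley importance scores
    for an instance pair (x, y) scored by a black-box pairwise model.

    model(x, y) -> float : black-box pairwise score (hypothetical interface)
    pair        : (x, y) tuple of equal-length feature lists
    baseline    : reference values standing in for "absent" features
    """
    rng = random.Random(seed)
    x, y = pair
    d = len(x)
    phi = [0.0] * d

    def masked_score(subset):
        # Features outside `subset` are replaced by baseline values in
        # both instances (one simple way to model feature absence).
        xm = [x[i] if i in subset else baseline[i] for i in range(d)]
        ym = [y[i] if i in subset else baseline[i] for i in range(d)]
        return model(xm, ym)

    for _ in range(n_samples):
        perm = list(range(d))
        rng.shuffle(perm)         # sample a random feature ordering
        included = set()
        prev = masked_score(included)
        for i in perm:
            included.add(i)
            cur = masked_score(included)
            phi[i] += cur - prev  # marginal contribution of feature i
            prev = cur
    return [p / n_samples for p in phi]
```

For an additive pairwise model such as the negative squared distance `model(x, y) = -sum((x_i - y_i)**2)`, each feature's estimated score reduces exactly to its own term's contribution, and the scores sum to the gap between the pair's score and the baseline's score, which is the efficiency property of Shapley values. The exponential cost of exact Shapley computation (here traded for sampling error) is precisely the motivation the abstract gives for the more efficient approximate method.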




How to Cite

Huai, M., Wang, D., Miao, C., & Zhang, A. (2020). Towards Interpretation of Pairwise Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 4166-4173.



AAAI Technical Track: Machine Learning