Learning to Rank: How GNNs Solve Max-Clique and Sparse PCA
DOI: https://doi.org/10.1609/aaai.v40i30.39734

Abstract
Graph neural networks (GNNs) have shown promise on combinatorial problems such as Max-Clique, yet it remains unclear what algorithmic principles they actually learn. This paper introduces a concept-driven framework for evaluating and interpreting GNNs on such tasks. We begin with a principled benchmark based on synthetic graphs with known difficulty levels—easy, medium, and hard—derived from theoretical thresholds for planted cliques. Using this setup, we show that GNNs reliably learn a simple yet powerful concept: degree-based ranking. This insight motivates a new decoder, Least-Probable Removal (LPR), which significantly outperforms the common top-k strategy, especially on harder and real-world instances. Our analysis pipeline connects latent representations to classical heuristics, improving both interpretability and performance. Finally, we demonstrate cross-domain generalization to sparse PCA, showing that the same GNN architecture and decoding strategy succeed in recovering sparse principal components, revealing a shared underlying principle across domains.
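The abstract describes Least-Probable Removal (LPR) only at a high level: instead of keeping the top-k highest-probability nodes, the decoder iteratively discards the node the model is least confident about until the surviving vertices form a clique. A minimal sketch of that idea, assuming a dense adjacency matrix and per-node probabilities (the function names `is_clique` and `lpr_decode` are illustrative, not from the paper):

```python
def is_clique(adj, nodes):
    """Check that every pair of the given vertices is connected."""
    nodes = list(nodes)
    return all(adj[u][v] for i, u in enumerate(nodes) for v in nodes[i + 1:])

def lpr_decode(adj, probs):
    """Least-Probable Removal sketch: repeatedly drop the remaining
    vertex with the lowest model probability until what is left
    is a clique. This is an assumed reading of the decoder the
    abstract names, not the paper's exact implementation."""
    remaining = set(range(len(adj)))
    while not is_clique(adj, remaining):
        worst = min(remaining, key=lambda v: probs[v])
        remaining.remove(worst)
    return remaining

# Toy example: nodes 0, 1, 2 form a triangle; 3 and 4 are distractors.
adj = [
    [0, 1, 1, 1, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
]
probs = [0.9, 0.8, 0.85, 0.6, 0.1]
print(lpr_decode(adj, probs))  # → {0, 1, 2}
```

Unlike top-k, this decoder never commits to a fixed output size and always returns a feasible clique, which matches the abstract's claim that LPR is more robust on hard instances.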
Published
2026-03-14
How to Cite
Shoham, E., Haber, O., Rika, H., & Vilenchik, D. (2026). Learning to Rank: How GNNs Solve Max-Clique and Sparse PCA. Proceedings of the AAAI Conference on Artificial Intelligence, 40(30), 25401–25409. https://doi.org/10.1609/aaai.v40i30.39734
Issue
Section
AAAI Technical Track on Machine Learning VII