Finding Interpretable Class-Specific Patterns through Efficient Neural Search

Authors

  • Nils Philipp Walter CISPA Helmholtz Center for Information Security
  • Jonas Fischer Harvard University
  • Jilles Vreeken CISPA Helmholtz Center for Information Security

DOI:

https://doi.org/10.1609/aaai.v38i8.28756

Keywords:

DMKM: Rule Mining & Pattern Mining, ML: Neuro-Symbolic Learning, ML: Scalability of ML Systems, ML: Transparent, Interpretable, Explainable ML

Abstract

Discovering patterns in data that best describe the differences between classes allows us to hypothesize and reason about class-specific mechanisms. In molecular biology, for example, these bear the promise of advancing the understanding of cellular processes that differ between tissues or diseases, which could lead to novel treatments. To be useful in practice, methods that tackle the problem of finding such differential patterns must be readily interpretable by domain experts and scalable to extremely high-dimensional data. In this work, we propose Diffnaps, a novel, inherently interpretable binary neural network architecture that extracts differential patterns from data. Diffnaps scales to hundreds of thousands of features and is robust to noise, thus overcoming the limitations of current state-of-the-art methods in large-scale applications such as biology. We show on synthetic and real-world data, including three biological applications, that unlike its competitors, Diffnaps consistently yields accurate, succinct, and interpretable class descriptions.
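To make the problem statement concrete, the following toy sketch illustrates what a differential pattern is: a conjunction of binary features whose support differs strongly between classes. This is an illustrative brute-force baseline only, not the authors' neural method; all names, data, and the scoring function are hypothetical assumptions for demonstration.

```python
from itertools import combinations

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical binary data: 200 samples, 10 features, two classes.
n, d = 200, 10
y = rng.integers(0, 2, size=n)
X = rng.integers(0, 2, size=(n, d))
# Plant a class-specific pattern: features {0, 1} always co-occur in class 1.
X[y == 1, 0] = 1
X[y == 1, 1] = 1

def support(X, pattern):
    """Fraction of rows in which all features of `pattern` are 1."""
    return X[:, pattern].all(axis=1).mean()

def differential_score(X, y, pattern, c=1):
    """Support of the pattern in class c minus its support elsewhere."""
    return support(X[y == c], list(pattern)) - support(X[y != c], list(pattern))

# Exhaustive search over feature pairs: feasible only for tiny d, which is
# exactly the scalability gap an efficient neural search aims to close.
best = max(combinations(range(d), 2), key=lambda p: differential_score(X, y, p))
print(best)
```

With the planted signal, the pair (0, 1) attains the highest differential score; the combinatorial explosion of this search for large d motivates replacing enumeration with a learned, interpretable model.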

Published

2024-03-24

How to Cite

Walter, N. P., Fischer, J., & Vreeken, J. (2024). Finding Interpretable Class-Specific Patterns through Efficient Neural Search. Proceedings of the AAAI Conference on Artificial Intelligence, 38(8), 9062-9070. https://doi.org/10.1609/aaai.v38i8.28756

Section

AAAI Technical Track on Data Mining & Knowledge Management