ADMoE: Anomaly Detection with Mixture-of-Experts from Noisy Labels


  • Yue Zhao Carnegie Mellon University
  • Guoqing Zheng Microsoft
  • Subhabrata Mukherjee Microsoft
  • Robert McCann Microsoft
  • Ahmed Awadallah Microsoft



Keywords: DMKM: Anomaly/Outlier Detection; ML: Applications


Existing work on anomaly detection (AD) relies on clean labels from human annotators, which are expensive to acquire in practice. In this work, we propose a method to leverage weak/noisy labels (e.g., risk scores generated by machine rules for detecting malware) that are cheaper to obtain for anomaly detection. Specifically, we propose ADMoE, the first framework that enables anomaly detection algorithms to learn from noisy labels. In a nutshell, ADMoE leverages a mixture-of-experts (MoE) architecture to encourage specialized and scalable learning from multiple noisy sources. It captures the similarities among the noisy label sets by sharing most model parameters, while encouraging specialization through dedicated "expert" sub-networks. To further extract signal from the noisy labels, ADMoE also uses them as input features to facilitate expert learning. Extensive results on eight datasets (including a proprietary enterprise-security dataset) demonstrate the effectiveness of ADMoE: it brings up to 34% performance improvement over not using it, and it outperforms 13 leading baselines with equivalent network parameters and FLOPS. Notably, ADMoE is model-agnostic, enabling any neural-network-based detection method to handle noisy labels; we showcase its results on both a multilayer perceptron (MLP) and the leading AD method DeepSAD.
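To make the architecture described above concrete, here is a minimal NumPy sketch of the idea: a shared hidden layer models what the noisy sources agree on, small per-source "expert" heads specialize, a softmax gate mixes their scores, and the noisy labels themselves are concatenated to the raw features as extra inputs. All names and sizes are illustrative assumptions; this is not the authors' implementation and omits training entirely.

```python
import numpy as np

rng = np.random.default_rng(0)


def relu(x):
    return np.maximum(x, 0.0)


class ADMoESketch:
    """Illustrative ADMoE-style forward pass (not the paper's code):
    shared parameters + expert sub-networks + a gating layer,
    with noisy labels appended to the input features."""

    def __init__(self, n_features, n_noisy_sources, n_experts=4, hidden=16):
        # Noisy labels are used as extra input features, so the input
        # dimension is raw features plus one column per noisy source.
        d = n_features + n_noisy_sources
        self.W_shared = rng.normal(scale=0.1, size=(d, hidden))   # shared by all experts
        self.experts = [rng.normal(scale=0.1, size=(hidden, 1))   # specialized heads
                        for _ in range(n_experts)]
        self.W_gate = rng.normal(scale=0.1, size=(d, n_experts))  # gating layer

    def forward(self, x, noisy_labels):
        z = np.concatenate([x, noisy_labels], axis=1)
        h = relu(z @ self.W_shared)                    # shared representation
        # Softmax gate decides how much each expert contributes per sample.
        logits = z @ self.W_gate
        gate = np.exp(logits - logits.max(axis=1, keepdims=True))
        gate /= gate.sum(axis=1, keepdims=True)
        expert_scores = np.concatenate([h @ W for W in self.experts], axis=1)
        return (gate * expert_scores).sum(axis=1)      # one anomaly score per sample


model = ADMoESketch(n_features=10, n_noisy_sources=3)
x = rng.normal(size=(5, 10))                           # 5 samples, 10 features
noisy = rng.integers(0, 2, size=(5, 3)).astype(float)  # 3 noisy label sources
scores = model.forward(x, noisy)
```

Because the shared layer holds most of the parameters, adding another noisy source only adds a small expert head and one gate column, which is the scalability argument the abstract makes.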




How to Cite

Zhao, Y., Zheng, G., Mukherjee, S., McCann, R., & Awadallah, A. (2023). ADMoE: Anomaly Detection with Mixture-of-Experts from Noisy Labels. Proceedings of the AAAI Conference on Artificial Intelligence, 37(4), 4937-4945.



AAAI Technical Track on Data Mining and Knowledge Management