Feature-Space Bayesian Adversarial Learning Improved Malware Detector Robustness

Authors

  • Bao Gia Doan, The University of Adelaide
  • Shuiqiao Yang, UNSW
  • Paul Montague, DST
  • Olivier De Vel, CSIRO Data61
  • Tamas Abraham, DST
  • Seyit Camtepe, CSIRO Data61
  • Salil S. Kanhere, UNSW Sydney
  • Ehsan Abbasnejad, The University of Adelaide
  • Damith C. Ranasinghe, The University of Adelaide

DOI:

https://doi.org/10.1609/aaai.v37i12.26727

Keywords:

General

Abstract

We present a new algorithm to train a robust malware detector. Malware is a prolific problem and malware detectors are a front-line defense. Modern detectors rely on machine learning algorithms. The adversarial objective, then, is to devise alterations to the malware code that decrease the chance of detection whilst preserving the functionality and realism of the malware. Adversarial learning is effective in improving robustness, but generating functional and realistic adversarial malware samples is non-trivial because: i) in contrast to tasks that can use gradient-based feedback, adversarial learning is hard in a domain without a differentiable mapping function from the problem space (malware code inputs) to the feature space; and ii) it is difficult to ensure the adversarial malware is realistic and functional. This presents a challenge for developing scalable adversarial machine learning algorithms for large datasets at a production or commercial scale to realize robust malware detectors. We propose an alternative: perform adversarial learning in the feature space rather than the problem space. We prove that the projection into the feature space of perturbed, yet valid, malware in the problem space is always a subset of the adversarial examples generated in the feature space. Hence, by training a network to be robust against feature-space adversarial examples, we inherently achieve robustness against problem-space adversarial examples. We formulate a Bayesian adversarial learning objective that captures the distribution of models for improved robustness. To explain the robustness of the Bayesian adversarial learning algorithm, we prove that our learning method bounds the difference between the adversarial risk and the empirical risk, and thereby improves robustness. We show that Bayesian neural networks (BNNs) achieve state-of-the-art results, especially in the False Positive Rate (FPR) regime, and that adversarially trained BNNs achieve state-of-the-art robustness. Notably, adversarially trained BNNs are robust against stronger attacks with larger attack budgets by a margin of up to 15% on a recent production-scale malware dataset of more than 20 million samples. Importantly, our efforts create a benchmark for future defenses in the malware domain.
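To make the training scheme concrete, below is a minimal sketch (not the authors' released code) of feature-space adversarial training for a Bayesian detector. All names and parameters are illustrative: it assumes features are fixed-length vectors already extracted from binaries, approximates Bayesian inference with MC dropout as a cheap stand-in for the paper's posterior over models, and uses an L-infinity PGD attack applied directly in the feature space.

# A minimal sketch, assuming PyTorch, extracted feature vectors, and
# MC dropout as an approximation to the model distribution. Not the
# authors' method; an illustration of feature-space adversarial training.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DropoutMLP(nn.Module):
    """Feed-forward detector; dropout layers double as approximate
    posterior sampling when kept active (MC dropout)."""
    def __init__(self, in_dim, hidden=256, p_drop=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, 2),  # benign vs. malware logits
        )
    def forward(self, x):
        return self.net(x)

def pgd_feature_attack(model, x, y, eps=0.1, alpha=0.02, steps=10):
    """Craft L-infinity PGD adversarial examples in feature space,
    sidestepping the non-differentiable code-to-feature mapping."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into eps-ball
    return x_adv.detach()

def adversarial_train_step(model, opt, x, y, mc_samples=3):
    """One training step: average the adversarial loss over several
    dropout masks, approximating an expectation over models."""
    model.train()
    x_adv = pgd_feature_attack(model, x, y)
    opt.zero_grad()
    loss = sum(F.cross_entropy(model(x_adv), y)
               for _ in range(mc_samples)) / mc_samples
    loss.backward()
    opt.step()
    return loss.item()

# Hypothetical usage (feature dimension is illustrative):
#   model = DropoutMLP(in_dim=2381)
#   opt = torch.optim.Adam(model.parameters(), lr=1e-3)
#   for x, y in loader: adversarial_train_step(model, opt, x, y)

The design choice mirrored from the abstract is that perturbations are applied to feature vectors rather than to malware binaries, so gradient-based attack generation is possible; averaging the loss over several dropout masks is one simple way to approximate the expectation over the distribution of models that the Bayesian objective calls for.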


Published

2023-06-26

How to Cite

Doan, B. G., Yang, S., Montague, P., De Vel, O., Abraham, T., Camtepe, S., Kanhere, S. S., Abbasnejad, E., & Ranasinghe, D. C. (2023). Feature-Space Bayesian Adversarial Learning Improved Malware Detector Robustness. Proceedings of the AAAI Conference on Artificial Intelligence, 37(12), 14783-14791. https://doi.org/10.1609/aaai.v37i12.26727

Issue

Vol. 37 No. 12 (2023)

Section

AAAI Special Track on Safe and Robust AI