HogVul: Black-box Adversarial Code Generation Framework Against LM-based Vulnerability Detectors
DOI:
https://doi.org/10.1609/aaai.v40i2.37118
Abstract
Recent advances in software vulnerability detection have been driven by Language Model (LM)-based approaches. However, these models remain vulnerable to adversarial attacks that exploit lexical and syntactic perturbations, allowing critical flaws to evade detection. Existing black-box attacks on LM-based vulnerability detectors rely primarily on isolated perturbation strategies, limiting their ability to efficiently explore the adversarial code space for optimal perturbations. To bridge this gap, we propose HogVul, a black-box adversarial code generation framework that integrates both lexical and syntactic perturbations under a unified dual-channel optimization strategy driven by Particle Swarm Optimization (PSO). By systematically coordinating these two levels of perturbation, HogVul effectively expands the search space for adversarial examples, enhancing attack efficacy. Extensive experiments on four benchmark datasets demonstrate that HogVul achieves an average attack success rate improvement of 26.05% over state-of-the-art baseline methods. These findings highlight the potential of hybrid optimization strategies in exposing model vulnerabilities.
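To make the PSO-driven search concrete, the sketch below shows a minimal, generic Particle Swarm Optimization loop of the kind the abstract describes. It is not the authors' implementation: the swarm parameters, the continuous encoding of perturbation choices, and the `mock_evasion_score` function (a hypothetical stand-in for querying a black-box detector, where higher means more likely to evade detection) are all illustrative assumptions.

```python
import random

def pso(fitness, dim, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO that maximizes `fitness` over a `dim`-dimensional vector.

    In an attack like HogVul's, each position would encode a candidate
    combination of perturbations; here it is just a real vector.
    """
    rng = random.Random(seed)
    pos = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    # Personal bests and the global best seen so far.
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Standard velocity update: inertia + cognitive + social terms.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f > gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Hypothetical stand-in for the black-box detector query: a smooth score
# peaking at (0.5, ..., 0.5), so the swarm has a clear optimum to find.
def mock_evasion_score(x):
    return -sum((xi - 0.5) ** 2 for xi in x)

best, score = pso(mock_evasion_score, dim=4)
```

In a real attack the fitness call would query the target detector with the perturbed code and reward drops in its vulnerability confidence; since each fitness evaluation is a model query, the swarm size and iteration count directly bound the query budget.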
Published
2026-03-14
How to Cite
Yang, J., He, P., Du, T., Bing, S., & Zhang, X. (2026). HogVul: Black-box Adversarial Code Generation Framework Against LM-based Vulnerability Detectors. Proceedings of the AAAI Conference on Artificial Intelligence, 40(2), 1435-1443. https://doi.org/10.1609/aaai.v40i2.37118
Issue
Section
AAAI Technical Track on Application Domains II