Reinforcement Learning Platform for Adversarial Black-box Attacks with Custom Distortion Filters

Authors

  • Soumyendu Sarkar, Hewlett Packard Enterprise
  • Ashwin Ramesh Babu, Hewlett Packard Enterprise
  • Sajad Mousavi, Hewlett Packard Enterprise
  • Vineet Gundecha, Hewlett Packard Enterprise
  • Sahand Ghorbanpour, Hewlett Packard Enterprise
  • Avisek Naug, Hewlett Packard Enterprise
  • Ricardo Luna Gutiérrez, Hewlett Packard Enterprise
  • Antonio Guillen, Hewlett Packard Enterprise
  • Desik Rengarajan, Hewlett Packard Enterprise; Amazon

DOI:

https://doi.org/10.1609/aaai.v39i26.34976

Abstract

We present RLAB, a Reinforcement Learning platform for adversarial black-box untargeted and targeted attacks that allows users to select from various distortion filters to create adversarial examples. The platform uses a reinforcement learning agent to add minimal distortion to input images while still causing misclassification by the target model. At each step, the agent uses a novel dual-action method that explores the input image to identify sensitive regions for adding distortion while removing noise that has little impact on the target model. This dual action leads to faster and more efficient convergence of the attack. The platform can also be used to measure the robustness of image classification models against specific distortion types. Furthermore, retraining models with adversarial samples generated by the platform significantly improves their robustness when evaluated on benchmark datasets. The proposed platform outperforms state-of-the-art methods in the average number of queries required to cause misclassification, advancing trustworthy AI with a positive social impact.
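The dual-action idea in the abstract can be illustrated with a minimal, self-contained sketch. Everything here is hypothetical: the "model" is a toy weighted-mean scorer standing in for a real black-box classifier, and the agent is replaced by a greedy query loop rather than the paper's trained RL policy. Each step adds distortion where it most lowers the true-class score, then tries to remove previously added distortion whose restoration barely changes the score — keeping total perturbation small, as the dual-action method intends.

```python
def black_box_score(img, weights):
    """Toy stand-in for the black-box target model: returns the true-class
    confidence as a weighted mean of pixel intensities. (Hypothetical; the
    platform's real targets are classifiers queried only via their outputs.)"""
    total = sum(w * p for wrow, prow in zip(weights, img)
                for w, p in zip(wrow, prow))
    return total / sum(sum(wrow) for wrow in weights)

def dual_action_attack(img, weights, eps=0.2, tol=0.001, max_steps=200):
    """Greedy sketch of the dual-action loop: ADD distortion to the most
    sensitive pixel, then REMOVE any distortion with negligible impact.
    Returns the adversarial image and the number of model queries used."""
    img = [row[:] for row in img]   # do not modify the caller's image
    added = []                      # pixels currently carrying distortion
    queries = 0
    for _ in range(max_steps):
        score = black_box_score(img, weights); queries += 1
        if score <= 0.5:            # decision flipped: attack succeeded
            return img, queries
        # Action 1 (add): probe each pixel, keep the most damaging distortion.
        best, best_drop = None, 0.0
        for r in range(len(img)):
            for c in range(len(img[0])):
                if img[r][c] < eps:
                    continue        # pixel already at the floor
                img[r][c] -= eps
                drop = score - black_box_score(img, weights); queries += 1
                img[r][c] += eps
                if drop > best_drop:
                    best, best_drop = (r, c), drop
        if best is None:
            break                   # no useful distortion left to add
        br, bc = best
        img[br][bc] -= eps
        added.append((br, bc))
        # Action 2 (remove): undo distortions whose restoration is nearly free.
        cur = black_box_score(img, weights); queries += 1
        kept = []
        for r, c in added:
            img[r][c] += eps        # tentatively restore the pixel
            rise = black_box_score(img, weights) - cur; queries += 1
            if rise < tol:
                cur += rise         # negligible impact: keep it restored
                continue
            img[r][c] -= eps        # distortion matters: re-apply it
            kept.append((r, c))
        added = kept
    return img, queries
```

A usage example on a 4×4 image whose pixels the toy model weights unevenly, so the greedy loop concentrates distortion on the most sensitive region:

```python
img = [[0.8] * 4 for _ in range(4)]
weights = [[1 + r + c for c in range(4)] for r in range(4)]
adv, queries = dual_action_attack(img, weights)
print(black_box_score(adv, weights) <= 0.5)  # decision flipped
```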

Published

2025-04-11

How to Cite

Sarkar, S., Ramesh Babu, A., Mousavi, S., Gundecha, V., Ghorbanpour, S., Naug, A., Luna Gutiérrez, R., Guillen, A., & Rengarajan, D. (2025). Reinforcement Learning Platform for Adversarial Black-box Attacks with Custom Distortion Filters. Proceedings of the AAAI Conference on Artificial Intelligence, 39(26), 27628-27635. https://doi.org/10.1609/aaai.v39i26.34976

Section

AAAI Technical Track on AI Alignment