Improving Model Robustness by Adaptively Correcting Perturbation Levels with Active Queries
Keywords: Active Learning, Adversarial Learning & Robustness
Abstract
In addition to high accuracy, robustness is becoming increasingly important for machine learning models in various applications. Recently, much research has been devoted to improving model robustness by training with noise perturbations. Most existing studies assume a fixed perturbation level for all training examples, which rarely holds in real tasks. In fact, excessive perturbations may destroy the discriminative content of an example, while deficient perturbations may fail to provide helpful information for improving robustness. Motivated by this observation, we propose to adaptively adjust the perturbation level for each example during training. Specifically, a novel active learning framework is proposed that allows the model to interactively query the correct perturbation level from human experts. By designing a cost-effective sampling strategy along with a new query type, the robustness can be significantly improved with only a few queries. Both theoretical analysis and experimental studies validate the effectiveness of the proposed approach.
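The abstract's query loop can be illustrated with a minimal sketch. Note this is a toy illustration, not the paper's actual algorithm: the function name, the deviation-from-mean selection heuristic standing in for the paper's cost-effective sampling strategy, and the `oracle` callable standing in for the human expert are all assumptions made for demonstration.

```python
def active_perturbation_correction(examples, oracle, budget, init_level=0.1):
    """Toy sketch of actively correcting per-example perturbation levels.

    Every example starts at a fixed perturbation level. Within a limited
    query budget, the learner repeatedly selects an unqueried example and
    asks an oracle (standing in for the human expert) for its correct
    level. The selection rule here -- largest deviation from the mean
    level -- is only a placeholder for the paper's sampling strategy.
    """
    levels = {x: init_level for x in examples}
    queried = set()
    for _ in range(budget):
        candidates = [x for x in examples if x not in queried]
        if not candidates:
            break  # every example has already been queried
        mean = sum(levels.values()) / len(levels)
        # Placeholder heuristic: query the example whose current level
        # deviates most from the running mean.
        pick = max(candidates, key=lambda x: abs(levels[x] - mean))
        levels[pick] = oracle(pick)  # expert supplies the correct level
        queried.add(pick)
    return levels
```

A usage example: with five examples and a hypothetical oracle `lambda x: 0.05 * x`, a budget of three queries corrects three of the five levels while the rest keep their initial value.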
How to Cite
Ning, K.-P., Tao, L., Chen, S., & Huang, S.-J. (2021). Improving Model Robustness by Adaptively Correcting Perturbation Levels with Active Queries. Proceedings of the AAAI Conference on Artificial Intelligence, 35(10), 9161-9169. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17106
AAAI Technical Track on Machine Learning III