Towards One Shot Search Space Poisoning in Neural Architecture Search (Student Abstract)
DOI:
https://doi.org/10.1609/aaai.v36i11.21658
Keywords:
Deep Learning, Adversarial Machine Learning, Automated Machine Learning, Poisoning Attacks, Neural Architecture Search, Neural Networks
Abstract
We evaluate the robustness of a Neural Architecture Search (NAS) algorithm known as Efficient NAS (ENAS) against data-agnostic poisoning attacks on the original search space with carefully designed ineffective operations. We empirically demonstrate how our one-shot search space poisoning approach exploits design flaws in the ENAS controller to degrade predictive performance on classification tasks. With just two poisoning operations injected into the search space, we inflate prediction error rates for child networks up to 90% on the CIFAR-10 dataset.
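The abstract describes a one-shot attack in which ineffective operations are injected into the ENAS search space before architecture search begins. The sketch below is only an illustration of what such an injection could look like in PyTorch; the dictionary representation of the search space, the operation names, and the specific poison operations are assumptions for exposition, not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): one-shot poisoning of an
# ENAS-style search space by adding ineffective candidate operations.
import torch
import torch.nn as nn


class ZeroOp(nn.Module):
    """Hypothetical poison operation: discards its input and emits zeros."""
    def forward(self, x):
        return torch.zeros_like(x)


# A clean ENAS-like search space: candidate operations (keyed by name) that
# the controller can sample when building child networks. Names are assumed.
CLEAN_SEARCH_SPACE = {
    "conv_3x3": lambda c: nn.Conv2d(c, c, kernel_size=3, padding=1),
    "conv_5x5": lambda c: nn.Conv2d(c, c, kernel_size=5, padding=2),
    "max_pool": lambda c: nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
    "avg_pool": lambda c: nn.AvgPool2d(kernel_size=3, stride=1, padding=1),
    "identity": lambda c: nn.Identity(),
}

# Two example poison operations: both are syntactically valid layers, but they
# destroy the signal flowing through any child network that selects them.
POISON_OPS = {
    "zeroize": lambda c: ZeroOp(),
    "drop_all": lambda c: nn.Dropout(p=1.0),  # drops every activation in training
}


def poison_search_space(search_space, poison_ops):
    """One-shot, data-agnostic poisoning: inject ineffective operations into
    the search space once, before architecture search starts."""
    poisoned = dict(search_space)
    poisoned.update(poison_ops)
    return poisoned


# Usage: the controller would then search over the poisoned operation set.
poisoned_space = poison_search_space(CLEAN_SEARCH_SPACE, POISON_OPS)
```

Because the injection happens once and does not touch the training data, the attack matches the "one-shot" and "data-agnostic" framing in the abstract; the poisoned operations simply become additional choices the ENAS controller can (and, per the paper's results, often does) route child networks through.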
Published
2022-06-28
How to Cite
Saxena, N., Wu, R., & Jain, R. (2022). Towards One Shot Search Space Poisoning in Neural Architecture Search (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 36(11), 13043-13044. https://doi.org/10.1609/aaai.v36i11.21658
Issue
Section
AAAI Student Abstract and Poster Program