PatchNAS: Repairing DNNs in Deployment with Patched Network Architecture Search
DOI:
https://doi.org/10.1609/aaai.v37i12.26730
Keywords:
General
Abstract
Despite being widely deployed in safety-critical applications such as autonomous driving and health care, deep neural networks (DNNs) still suffer from non-negligible reliability issues. Numerous works have reported that DNNs are vulnerable to both natural environmental noise and man-made adversarial noise. How to repair deployed DNNs given noisy samples is therefore a crucial topic for the robustness of neural networks. While many network repairing methods based on data augmentation and weight adjustment have been proposed, they require retraining and redeploying the whole model, which incurs high overhead and is infeasible for the varying faulty cases that arise in different deployment environments. In this paper, we propose a novel network repairing framework called PatchNAS from the architecture perspective, in which we freeze the pretrained DNN and introduce a small patch network to handle failure samples at runtime. PatchNAS introduces a novel network instrumentation method that determines the faulty stage of the network structure given the collected failure samples. A small patch network structure is then searched in an unsupervised manner, using neural architecture search (NAS), with data samples from the deployment environment. The patch network repairs the DNN by correcting the output feature maps of the faulty stage, which helps maintain network performance on normal samples and enhances robustness in noisy environments. Extensive experiments on several DNNs across 15 types of natural noise show that the proposed PatchNAS outperforms the state of the art, with significant performance improvements as well as much lower deployment overhead.
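To make the repair idea in the abstract concrete, the following is a minimal illustrative sketch (assuming PyTorch; all module and layer names are hypothetical and do not come from the paper). It shows a small trainable patch network applied as a residual correction to the output feature maps of a frozen stage of a pretrained backbone. In PatchNAS itself the patch architecture is found via NAS rather than fixed by hand.

```python
import torch
import torch.nn as nn


class PatchedStage(nn.Module):
    """Wraps a frozen stage of a pretrained backbone and adds a small,
    trainable patch network that corrects the stage's output feature maps.
    Illustrative sketch only; the actual patch structure in PatchNAS is
    obtained through neural architecture search."""

    def __init__(self, frozen_stage: nn.Module, channels: int):
        super().__init__()
        self.frozen_stage = frozen_stage
        for p in self.frozen_stage.parameters():
            p.requires_grad = False  # the pretrained weights stay fixed

        # A tiny residual correction block standing in for the searched patch.
        self.patch = nn.Sequential(
            nn.Conv2d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.frozen_stage(x)
        # Correct the faulty stage's feature maps; only the patch is trained.
        return feat + self.patch(feat)


# Usage sketch: wrap the stage identified as faulty, leave the rest untouched.
backbone_stage = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
patched = PatchedStage(backbone_stage, channels=64)
out = patched(torch.randn(1, 64, 32, 32))
print(out.shape)  # torch.Size([1, 64, 32, 32])
```

Because only the patch parameters are trainable, the pretrained model can remain deployed as-is, which matches the low-overhead deployment claim in the abstract.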
Published
2023-06-26
How to Cite
Fang, Y., Li, W., Zeng, Y., Zheng, Y., Hu, Z., & Lu, S. (2023). PatchNAS: Repairing DNNs in Deployment with Patched Network Architecture Search. Proceedings of the AAAI Conference on Artificial Intelligence, 37(12), 14811-14819. https://doi.org/10.1609/aaai.v37i12.26730
Issue
Section
AAAI Special Track on Safe and Robust AI