TY - JOUR
AU - Wang, Yu
AU - Stokes, Jack
AU - Marinescu, Mady
PY - 2020/04/03
Y2 - 2024/03/28
TI - Actor Critic Deep Reinforcement Learning for Neural Malware Control
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 34
IS - 01
SE - AAAI Technical Track: Applications
DO - 10.1609/aaai.v34i01.5449
UR - https://ojs.aaai.org/index.php/AAAI/article/view/5449
SP - 1005
EP - 1012
AB - In addition to using signatures, antimalware products also detect malicious attacks by evaluating unknown files in an emulated environment, i.e., a sandbox, prior to execution on a computer's native operating system. During emulation, a file cannot be scanned indefinitely, and antimalware engines often set the number of instructions to be executed based on a set of heuristics. These heuristics decide when to halt emulation using only partial information, which can cause the file to be executed for either too many or too few instructions. The method is also vulnerable if attackers learn the set of heuristics. Recent research uses a deep reinforcement learning (DRL) model employing a Deep Q-Network (DQN) to learn when to halt the emulation of a file. In this paper, we propose a new DRL-based system which instead employs a modified actor critic (AC) framework for the emulation halting task. This AC model dynamically predicts the best time to halt the file's execution based on a sequence of system API calls. Compared to the earlier models, the new model is capable of handling adversarial attacks by simulating their behaviors using the critic model. The new AC model demonstrates much better performance than both the DQN model and the antimalware engine's heuristics. In terms of execution speed (evaluated by the halting decision), the new model halts the execution of unknown files up to 2.5% earlier than the DQN model and 93.6% earlier than the heuristics. For the task of detecting malicious files, the proposed AC model increases the true positive rate by 9.9% from 69.5% to 76.4% at a false positive rate of 1% compared to the DQN model, and by 83.4% from 41.2% to 76.4% at a false positive rate of 1% compared to a recently proposed LSTM model.
ER -