TY - JOUR
AU - Dasgupta, Prithviraj
AU - Collins, Joseph
PY - 2019/06/24
Y2 - 2024/03/29
TI - A Survey of Game Theoretic Approaches for Adversarial Machine Learning in Cybersecurity Tasks
JF - AI Magazine
JA - AIMag
VL - 40
IS - 2
SE - Articles
DO - 10.1609/aimag.v40i2.2847
UR - https://ojs.aaai.org/aimagazine/index.php/aimagazine/article/view/2847
SP - 31-43
AB - Machine learning techniques are used extensively for automating various cybersecurity tasks. Most of these techniques use supervised learning algorithms that rely on training the algorithm to classify incoming data into categories, using data encountered in the relevant domain. A critical vulnerability of these algorithms is that they are susceptible to adversarial attacks, in which a malicious entity called an adversary deliberately alters the training data to misguide the learning algorithm into making classification errors. Adversarial attacks could render the learning algorithm unsuitable for use and leave critical systems vulnerable to cybersecurity attacks. This article provides a detailed survey of the state-of-the-art techniques that are used to make a machine learning algorithm robust against adversarial attacks by using the computational framework of game theory. We also discuss open problems and challenges and possible directions for further research that would make deep machine learning–based systems more robust and reliable for cybersecurity tasks.
ER -