Evaluation of Large Language Models on Code Obfuscation (Student Abstract)
DOI: https://doi.org/10.1609/aaai.v38i21.30517
Keywords: Large Language Models, Applications of AI, GPT
Abstract
Obfuscation aims to reduce the interpretability of code and conceal its behavior. Large Language Models (LLMs) have been proposed for both code synthesis and code analysis. This paper investigates how well LLMs can analyze code and identify its behavior. Specifically, we systematically evaluate several LLMs' ability to detect obfuscated code and identify its behavior across a variety of obfuscation techniques of varying complexity. The LLMs proved better at detecting obfuscations that changed identifiers, even to misleading ones, than at detecting obfuscations involving code insertions (unused variables, as well as expressions that replace constants with computations evaluating to those constants). The hardest to detect were obfuscations that layered multiple simple transformations; for these, only 20-40% of the LLMs' responses were correct. Adding misleading documentation was also effective at misleading LLMs. We provide all our code to replicate the results at https://github.com/SwindleA/LLMCodeObfuscation. Overall, our results suggest a gap in LLMs' ability to understand code.
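To make the obfuscation categories in the abstract concrete, here is a minimal, hypothetical sketch of the kinds of transformations described (identifier renaming, including misleading names; insertion of unused variables; and replacing constants with expressions that evaluate to them). These examples are illustrative assumptions, not the authors' exact transformations.

```python
# Original function: computes the sum of squares below n.
def sum_of_squares(n):
    return sum(i * i for i in range(n))

# 1) Identifier renaming, here to a deliberately misleading name:
def sort_list(x):  # misleading name; still computes a sum of squares
    return sum(i * i for i in range(x))

# 2) Code insertion: an unused variable plus a constant
#    rewritten as an expression that evaluates to it.
def sum_of_squares_obf(n):
    unused_buffer = []        # dead code: never read
    step = (7 - 5) // 2       # evaluates to 1, replacing the literal 1
    total = 0
    for i in range(0, n, step):
        total += i * i
    return total

# All three variants are semantically equivalent.
assert sum_of_squares(5) == sort_list(5) == sum_of_squares_obf(5) == 30
```

The paper's hardest category, layered obfuscation, would correspond to applying several such transformations to the same function at once.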
Published
2024-03-24
How to Cite
Swindle, A., McNealy, D., Krishnan, G., & Ramyaa, R. (2024). Evaluation of Large Language Models on Code Obfuscation (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23664-23666. https://doi.org/10.1609/aaai.v38i21.30517
Section
AAAI Student Abstract and Poster Program