MathAttack: Attacking Large Language Models towards Math Solving Ability

Authors

  • Zihao Zhou, Xi'an Jiaotong-Liverpool University
  • Qiufeng Wang, Xi'an Jiaotong-Liverpool University
  • Mingyu Jin, Northwestern University
  • Jie Yao, Xi'an Jiaotong-Liverpool University
  • Jianan Ye, Xi'an Jiaotong-Liverpool University
  • Wei Liu, ShanghaiTech University
  • Wei Wang, Xi'an Jiaotong-Liverpool University
  • Xiaowei Huang, University of Liverpool
  • Kaizhu Huang, Duke Kunshan University

DOI:

https://doi.org/10.1609/aaai.v38i17.29949

Keywords:

NLP: (Large) Language Models, NLP: Applications

Abstract

With the boom of Large Language Models (LLMs), research on solving Math Word Problems (MWPs) has recently made great progress. However, few studies have examined the robustness of LLMs' math solving ability. Instead of attacking the prompts used with LLMs, we propose MathAttack, a model that attacks MWP samples directly, which is closer to the essence of robustness in solving math problems. Compared to traditional text adversarial attacks, it is essential to preserve the mathematical logic of the original MWP during the attack. To this end, we propose logical entity recognition to identify logical entities, which are then frozen. Subsequently, the remaining text is attacked by a word-level attacker. Furthermore, we propose a new dataset, RobustMath, to evaluate the robustness of LLMs' math solving ability. Extensive experiments on RobustMath and two other math benchmark datasets, GSM8K and MultiArith, show that MathAttack can effectively attack the math solving ability of LLMs. In the experiments, we observe that (1) adversarial samples generated from higher-accuracy LLMs are also effective for attacking lower-accuracy LLMs (e.g., they transfer from larger to smaller LLMs, or from few-shot to zero-shot prompts); (2) complex MWPs (e.g., more solving steps, longer text, more numbers) are more vulnerable to attack; and (3) the robustness of LLMs can be improved by using our adversarial samples in few-shot prompts. Finally, we hope our practice and observations can serve as an important step towards enhancing the robustness of LLMs' math solving ability. The code and dataset are available at: https://github.com/zhouzihao501/MathAttack.
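The two-stage pipeline described in the abstract (freeze logical entities, then perturb the remaining words) can be sketched as follows. This is only an illustrative toy, not the paper's method: the actual logical entity recognizer is a trained model and the actual attacker is a learned word-level adversary, whereas here the recognizer simply freezes numbers and the token after each number, and the attacker applies a hand-written synonym table.

```python
import re

def find_logical_entities(problem: str) -> set:
    # Toy stand-in for logical entity recognition: freeze every token
    # containing a digit, plus the token immediately after it (often a
    # unit or counted noun), so the math logic is left untouched.
    frozen = set()
    tokens = problem.split()
    for i, tok in enumerate(tokens):
        if re.search(r"\d", tok):
            frozen.add(i)
            if i + 1 < len(tokens):
                frozen.add(i + 1)
    return frozen

def word_level_attack(problem: str, synonyms: dict) -> str:
    # Toy word-level attacker: substitute synonyms only for tokens that
    # are NOT frozen logical entities.
    frozen = find_logical_entities(problem)
    tokens = problem.split()
    out = []
    for i, tok in enumerate(tokens):
        if i not in frozen and tok.lower() in synonyms:
            out.append(synonyms[tok.lower()])
        else:
            out.append(tok)
    return " ".join(out)

mwp = "Tom has 3 apples and buys 4 more apples ."
subs = {"has": "owns", "buys": "purchases"}  # hypothetical substitutions
print(word_level_attack(mwp, subs))
# → Tom owns 3 apples and purchases 4 more apples .
```

The key design point the sketch captures is that the numbers "3" and "4" (and their adjacent tokens) are never perturbed, so the adversarial MWP still has the same answer as the original.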

Published

2024-03-24

How to Cite

Zhou, Z., Wang, Q., Jin, M., Yao, J., Ye, J., Liu, W., Wang, W., Huang, X., & Huang, K. (2024). MathAttack: Attacking Large Language Models towards Math Solving Ability. Proceedings of the AAAI Conference on Artificial Intelligence, 38(17), 19750-19758. https://doi.org/10.1609/aaai.v38i17.29949

Section

AAAI Technical Track on Natural Language Processing II