MP: Endowing Large Language Models with Lateral Thinking

Authors

  • Tian Bai College of Computer Science and Technology, Jilin University
  • Yongwang Cao College of Computer Science and Technology, Jilin University
  • Yan Ge Graduate School of Comprehensive Human Sciences, University of Tsukuba
  • Haitao Yu Institute of Library, Information and Media Science, University of Tsukuba

DOI:

https://doi.org/10.1609/aaai.v39i22.34514

Abstract

Recent studies show that Large Language Models (LLMs) often fall short in tasks demanding creative, lateral thinking because they lack clear awareness of their own reasoning processes. To address this issue, we propose a novel metacognitive prompting method (termed MP) that mimics human metacognition. By integrating metacognitive principles, MP endows LLMs with lateral thinking ability, enhancing their capacity to strategize, monitor, and reflect on their responses when handling creative tasks. Experimental results with five base LLMs across three lateral thinking datasets demonstrate that all LLMs armed with MP consistently outperform representative baseline methods. For example, with the GPT-3.5-turbo model, MP outperforms CoT prompting on Sentence Puzzle (+5.00%), Word Puzzle (+10.07%), BiRdQA (+6.48%), and RiddleSense (+2.65%). In particular, deploying MP with GPT-4 achieves significant performance improvements that even surpass human performance on the BRAINTEASER benchmark, demonstrating the transformative potential of MP in enhancing the creative problem-solving abilities of LLMs.
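The abstract describes MP as guiding an LLM to strategize, monitor, and reflect before answering. A minimal sketch of what such a prompting wrapper might look like is given below; the stage wording and function names are assumptions based on the abstract's description, not the paper's actual prompts.

```python
# Hypothetical sketch of a metacognitive prompting (MP) style wrapper.
# The stage instructions are paraphrased from the abstract's
# "strategize, monitor, reflect" description and are NOT the
# paper's exact prompt text.

def build_mp_prompt(question: str) -> str:
    """Compose a single prompt that walks the model through
    metacognitive stages before committing to an answer."""
    stages = [
        "1. Comprehend the question and restate it in your own words.",
        "2. Strategize: outline a plan, considering unconventional "
        "(lateral) interpretations, not only the literal reading.",
        "3. Monitor: while reasoning, check whether your current path "
        "rests on a default assumption the puzzle may subvert.",
        "4. Reflect: review your candidate answer and revise it if a "
        "more creative interpretation fits the question better.",
        "5. State your final answer.",
    ]
    return f"Question: {question}\n\n" + "\n".join(stages)


def answer_with_mp(llm, question: str) -> str:
    """Send the MP-style prompt to any callable LLM interface
    (e.g. a function wrapping an API call) and return its reply."""
    return llm(build_mp_prompt(question))
```

In this sketch, `llm` can be any callable that maps a prompt string to a completion, so the same wrapper could be reused across the five base LLMs the paper evaluates.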

Published

2025-04-11

How to Cite

Bai, T., Cao, Y., Ge, Y., & Yu, H. (2025). MP: Endowing Large Language Models with Lateral Thinking. Proceedings of the AAAI Conference on Artificial Intelligence, 39(22), 23460-23468. https://doi.org/10.1609/aaai.v39i22.34514

Section

AAAI Technical Track on Natural Language Processing I