Exploring Iterative Enhancement for Improving Learnersourced Multiple-Choice Question Explanations with Large Language Models

Authors

  • Qiming Bao Strong AI Lab, NAOInstitute, Waipapa Taumata Rau - The University of Auckland; Xtracta, New Zealand
  • Juho Leinonen School of Computer Science, University of Auckland
  • Alex Yuxuan Peng Strong AI Lab, NAOInstitute, Waipapa Taumata Rau - The University of Auckland
  • Wanjun Zhong Sun Yat-sen University
  • Gaël Gendron Strong AI Lab, NAOInstitute, Waipapa Taumata Rau - The University of Auckland
  • Timothy Pistotti Strong AI Lab, NAOInstitute, Waipapa Taumata Rau - The University of Auckland
  • Alice Huang School of Life and Environmental Sciences, The University of Sydney
  • Paul Denny School of Computer Science, University of Auckland
  • Michael Witbrock Strong AI Lab, NAOInstitute, Waipapa Taumata Rau - The University of Auckland
  • Jiamou Liu Strong AI Lab, NAOInstitute, Waipapa Taumata Rau - The University of Auckland

DOI:

https://doi.org/10.1609/aaai.v39i28.35164

Abstract

Large language models (LLMs) have demonstrated strong capabilities in language understanding and generation, and their potential in educational contexts is increasingly being explored. One promising area is learnersourcing, where students engage in creating their own educational content, such as multiple-choice questions. A critical step in this process is generating effective explanations for the solutions to these questions, as such explanations aid in peer understanding and promote deeper conceptual learning. However, students often find it difficult to craft high-quality explanations due to limited understanding or gaps in their subject knowledge. To support this task, we introduce "ILearner-LLM," a framework that uses iterative enhancement with LLMs to improve generated explanations. The framework combines an explanation generation model and an explanation evaluation model fine-tuned using student preferences for quality, where the evaluation model's quality feedback is passed back into the generation model to refine the output. Our experiments with LLaMA2-13B and GPT-4 using five large datasets from the PeerWise MCQ platform show that ILearner-LLM produces explanations of higher quality that closely align with those written by students. Our findings represent a promising approach for enriching the learnersourcing experience for students and for leveraging the capabilities of large language models for educational applications.
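The generate-evaluate-refine loop described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: `generate_explanation` and `evaluate_explanation` are hypothetical stubs standing in for the generation LLM (e.g. LLaMA2-13B or GPT-4) and the evaluation model fine-tuned on student quality ratings, and the scoring logic is invented purely so the loop terminates.

```python
def generate_explanation(question, previous=None, feedback=None):
    # Stub for the generation model. On the first call it drafts an
    # explanation; on later calls it revises the previous draft using
    # the evaluator's feedback score.
    if previous is None:
        return f"Explanation for: {question}"
    return previous + f" [revised; evaluator score was {feedback:.1f}]"


def evaluate_explanation(explanation):
    # Stub for the evaluation model fine-tuned on student ratings.
    # Here the score simply rises with each revision, capped at 5.0.
    return min(5.0, 3.0 + 0.8 * explanation.count("revised"))


def iterative_enhance(question, max_rounds=3, target_score=4.5):
    """Generate an explanation, then refine it until the evaluator's
    quality score reaches the target or the round budget is spent."""
    explanation = generate_explanation(question)
    score = evaluate_explanation(explanation)
    for _ in range(max_rounds):
        if score >= target_score:
            break
        # Feed the evaluator's score back into the generator.
        explanation = generate_explanation(question, explanation, score)
        score = evaluate_explanation(explanation)
    return explanation, score


explanation, score = iterative_enhance("Why does sorting take O(n log n)?")
```

With these stubs, the loop revises the draft twice before the score (4.6) clears the 4.5 target; the real framework replaces the stubs with LLM calls and a learned quality model.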

Published

2025-04-11

How to Cite

Bao, Q., Leinonen, J., Peng, A. Y., Zhong, W., Gendron, G., Pistotti, T., … Liu, J. (2025). Exploring Iterative Enhancement for Improving Learnersourced Multiple-Choice Question Explanations with Large Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 39(28), 28955–28963. https://doi.org/10.1609/aaai.v39i28.35164