DeepQR: Neural-Based Quality Ratings for Learnersourced Multiple-Choice Questions


  • Lin Ni University of Auckland
  • Qiming Bao University of Auckland
  • Xiaoxuan Li University of Auckland
  • Qianqian Qi University of Auckland
  • Paul Denny University of Auckland
  • Jim Warren University of Auckland
  • Michael Witbrock University of Auckland
  • Jiamou Liu University of Auckland



Keywords

Learnersourcing, Question Quality, MCQ, PeerWise, Natural Language Processing, Deep Learning


Abstract

Automated question quality rating (AQQR) aims to evaluate question quality through computational means, thereby addressing emerging challenges in online learnersourced question repositories. Existing methods for AQQR rely solely on explicitly defined criteria such as readability and word count, without fully exploiting the power of state-of-the-art deep-learning techniques. We propose DeepQR, a novel neural-network model for AQQR that is trained using multiple-choice question (MCQ) datasets collected from PeerWise, a widely used learnersourcing platform. In designing DeepQR, we investigate models based on explicitly defined features, semantic features, or both. We also introduce a self-attention mechanism to capture semantic correlations between MCQ components, and a contrastive-learning approach to acquire question representations using quality ratings. Extensive experiments on datasets collected from eight university-level courses show that DeepQR outperforms six comparative models.
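The abstract mentions a self-attention mechanism over MCQ components (e.g. the stem, the answer options, and the explanation). The following is a minimal illustrative sketch of scaled dot-product self-attention over component embeddings, not the authors' actual DeepQR architecture; the matrix shapes and the choice of five components are assumptions made purely for the example.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over MCQ component embeddings.

    X: (n_components, d) matrix, one row per component
    (e.g. stem, options, explanation -- hypothetical layout).
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    # Row-wise softmax: each row gives one component's attention
    # weights over all components.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
d = 8
X = rng.normal(size=(5, d))                 # 5 components (assumed)
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out, w = self_attention(X, Wq, Wk, Wv)
```

Each output row is a mixture of all component embeddings, which is one standard way such a mechanism can capture correlations between question parts.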




How to Cite

Ni, L., Bao, Q., Li, X., Qi, Q., Denny, P., Warren, J., Witbrock, M., & Liu, J. (2022). DeepQR: Neural-Based Quality Ratings for Learnersourced Multiple-Choice Questions. Proceedings of the AAAI Conference on Artificial Intelligence, 36(11), 12826-12834.