DeepQR: Neural-Based Quality Ratings for Learnersourced Multiple-Choice Questions
Keywords: Learnersourcing, Question Quality, MCQ, PeerWise, Natural Language Processing, Deep Learning
Abstract
Automated question quality rating (AQQR) aims to evaluate question quality through computational means, thereby addressing emerging challenges in online learnersourced question repositories. Existing methods for AQQR rely solely on explicitly defined criteria such as readability and word count, and do not fully exploit the power of state-of-the-art deep-learning techniques. We propose DeepQR, a novel neural-network model for AQQR that is trained on multiple-choice-question (MCQ) datasets collected from PeerWise, a widely used learnersourcing platform. Alongside DeepQR, we investigate models based on explicitly defined features, on semantic features, or on both. We also introduce a self-attention mechanism to capture semantic correlations between MCQ components, and a contrastive-learning approach to acquire question representations using quality ratings. Extensive experiments on datasets collected from eight university-level courses demonstrate that DeepQR outperforms six comparison models.
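The abstract mentions a self-attention mechanism over MCQ components (e.g., the question stem and its answer options). As a rough illustration only, and not the paper's actual architecture, scaled dot-product self-attention over per-component embeddings can be sketched as follows; the component count, embedding size, and function name are hypothetical:

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over MCQ component vectors.

    X: (n_components, d) matrix with one embedding per component
    (e.g., stem and answer options). Illustrative sketch only; the
    DeepQR model's actual attention layer may differ.
    """
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)  # pairwise semantic correlations
    # Row-wise softmax turns scores into attention weights
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    # Each output row mixes all components by their correlation weights
    return weights @ X

# Toy example: five components (stem + four options), 8-dim embeddings
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
Z = self_attention(X)
print(Z.shape)  # (5, 8)
```

Each output vector is a correlation-weighted mixture of all component embeddings, which is one way a model could capture how well an option relates to its stem.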
How to Cite
Ni, L., Bao, Q., Li, X., Qi, Q., Denny, P., Warren, J., Witbrock, M., & Liu, J. (2022). DeepQR: Neural-Based Quality Ratings for Learnersourced Multiple-Choice Questions. Proceedings of the AAAI Conference on Artificial Intelligence, 36(11), 12826-12834. https://doi.org/10.1609/aaai.v36i11.21562
EAAI Symposium: Full Papers