The Effect of Text Length in Crowdsourced Multiple Choice Questions

Authors

  • Sarah Luger, University of Edinburgh

DOI:

https://doi.org/10.1609/hcomp.v3i1.13268

Abstract

Automated systems that aid in the development of Multiple Choice Questions (MCQs) have value for both educators, who spend large amounts of time creating novel questions, and students, who spend a great deal of effort both practicing for and taking tests. The current approach to measuring question difficulty in MCQs relies on models of how high-performing pupils answer and contrasts that with the performance of their lower-performing peers. MCQs can be difficult in many ways. This paper looks specifically at how the number of words in the question stem and in the answer options affects question difficulty. This work is based on the hypothesis that questions are more difficult when the stem of the question and the answer options are semantically far apart. Testing this hypothesis requires, in part, normalizing for the length of the texts being compared. The MCQs used in the experiments were voluntarily authored by university students in biology courses. Future work includes additional experiments utilizing other aspects of this extensive crowdsourced data set.
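To make the abstract's quantities concrete, the sketch below computes the two text lengths it discusses (stem length and option length, in words) and a placeholder stem-to-option distance. This is a hypothetical illustration only: the paper's actual semantic distance measure is not given in the abstract, so a simple token-overlap proxy stands in for it, and the example question is invented.

    def tokens(text):
        """Lowercased word tokens; a crude stand-in for real preprocessing."""
        return text.lower().split()

    def stem_and_option_lengths(stem, options):
        """Word counts for the stem and for each answer option."""
        return len(tokens(stem)), [len(tokens(o)) for o in options]

    def overlap_distance(stem, option):
        """Toy stem-option distance: 1 minus Jaccard overlap of word sets.

        Placeholder only; the paper's semantic distance measure is not
        specified in the abstract. Lengths computed above are what one
        would use to normalize a measure like this.
        """
        a, b = set(tokens(stem)), set(tokens(option))
        if not a or not b:
            return 1.0
        return 1.0 - len(a & b) / len(a | b)

    # Hypothetical biology MCQ, purely for illustration.
    stem = "Which organelle is the primary site of ATP synthesis in eukaryotic cells?"
    options = ["Mitochondrion", "Ribosome", "Golgi apparatus", "Nucleus"]

    stem_len, option_lens = stem_and_option_lengths(stem, options)
    distances = [overlap_distance(stem, o) for o in options]
    print(stem_len, option_lens, [round(d, 2) for d in distances])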

Published

2016-03-28

How to Cite

Luger, S. (2016). The Effect of Text Length in Crowdsourced Multiple Choice Questions. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 3(1), 16-19. https://doi.org/10.1609/hcomp.v3i1.13268

Section

Crowdsourcing Breakthroughs for Language Technology Applications Workshop