Two Methods for Measuring Question Difficulty and Discrimination in Incomplete Crowdsourced Data

Authors

  • Sarah Luger The University of Edinburgh
  • Jeff Bowles The University of New Mexico, Albuquerque

DOI:

https://doi.org/10.1609/hcomp.v1i1.13129

Keywords:

multiple choice questions, exam-building, matrix-based approaches

Abstract

Educators who lack access to the proprietary data and methods used by educational testing companies would welcome assistance in creating high-quality exams. The current approach to measuring question difficulty relies on models of how high-performing pupils will answer a question and contrasts their performance with that of lower-performing peers. Inverting this process, so that educators can test their questions before students answer them, would speed up question development and improve question utility. We cover two methods for automatically judging the difficulty and discriminating power of multiple-choice questions (MCQs), and how best to build suitable exams from good questions.
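The abstract's notion of contrasting high- and low-performing pupils corresponds to classical item analysis: difficulty as the proportion of respondents answering correctly, and discrimination as the gap in correctness between top- and bottom-scoring groups. The sketch below is a minimal illustration of that classical approach on an incomplete (sparse) response matrix, not the authors' specific matrix-based methods; the function name and the thirds-based grouping are assumptions chosen for illustration.

```python
import numpy as np

def item_statistics(responses):
    """Classical item analysis on an incomplete response matrix.

    responses: 2-D array, one row per examinee, one column per question;
    entries are 1 (correct), 0 (incorrect), or np.nan (unanswered).
    Returns per-item difficulty (proportion correct among respondents)
    and a discrimination index contrasting top- and bottom-scoring thirds.
    NOTE: illustrative sketch only, not the paper's method.
    """
    responses = np.asarray(responses, dtype=float)
    answered = ~np.isnan(responses)

    # Difficulty: fraction correct among those who answered each item.
    difficulty = np.nansum(responses, axis=0) / answered.sum(axis=0)

    # Rank examinees by mean score over the items they actually answered.
    totals = np.nansum(responses, axis=1) / answered.sum(axis=1)
    order = np.argsort(totals)
    third = len(order) // 3
    low, high = order[:third], order[-third:]

    # Discrimination: top-third correctness minus bottom-third correctness.
    p_high = np.nansum(responses[high], axis=0) / answered[high].sum(axis=0)
    p_low = np.nansum(responses[low], axis=0) / answered[low].sum(axis=0)
    return difficulty, p_high - p_low
```

Under this convention, an item most of the top third answers correctly but most of the bottom third misses gets a discrimination index near 1, which is what makes it useful for separating examinees when assembling an exam.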

Published

2013-11-03

How to Cite

Luger, S., & Bowles, J. (2013). Two Methods for Measuring Question Difficulty and Discrimination in Incomplete Crowdsourced Data. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 1(1), 48-49. https://doi.org/10.1609/hcomp.v1i1.13129