Dropout Model Evaluation in MOOCs

Authors

  • Joshua Gardner, The University of Michigan - Ann Arbor
  • Christopher Brooks, The University of Michigan - Ann Arbor

DOI:

https://doi.org/10.1609/aaai.v32i1.11392

Keywords:

MOOCs, Model Evaluation, Applied Statistics, Predictive Modeling

Abstract

The field of learning analytics needs to adopt a more rigorous approach to predictive model evaluation that matches the complex practice of model-building. In this work, we present a procedure for statistically testing hypotheses about model performance that goes beyond the current state of practice in the community by analyzing both algorithms and feature extraction methods from raw data. We apply this method to a series of algorithms and feature sets derived from a large sample of Massive Open Online Courses (MOOCs). While a complete comparison of all potential modeling approaches is beyond the scope of this paper, we show that this approach reveals a large gap in dropout prediction performance between forum-, assignment-, and clickstream-based feature extraction methods, where the latter is significantly better than the former two, which are in turn indistinguishable from one another. This work has methodological implications for evaluating predictive or AI-based models of student success, and practical implications for the design and targeting of at-risk student models and interventions.
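As a hedged illustration of the kind of hypothesis testing the abstract describes (not necessarily the paper's exact procedure), the sketch below compares per-course AUC scores for three feature extraction methods using a Friedman omnibus test followed by pairwise Wilcoxon signed-rank tests with a Bonferroni correction. The feature-set names and AUC values are hypothetical and generated at random for the example.

```python
# Minimal sketch: statistical comparison of feature extraction methods
# across courses. All data here is synthetic and for illustration only.
from itertools import combinations

import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# Hypothetical per-course AUC scores for three feature extraction methods.
# Rows correspond to courses; each array holds one method's scores.
rng = np.random.default_rng(0)
n_courses = 30
auc = {
    "clickstream": rng.uniform(0.80, 0.90, n_courses),
    "forum":       rng.uniform(0.65, 0.75, n_courses),
    "assignment":  rng.uniform(0.64, 0.76, n_courses),
}

# Omnibus Friedman test: do the per-course rankings of the three feature
# sets depart from the null hypothesis of equal performance?
stat, p = friedmanchisquare(*auc.values())
print(f"Friedman chi-squared = {stat:.2f}, p = {p:.3g}")

# Post-hoc pairwise Wilcoxon signed-rank tests, Bonferroni-corrected
# for the number of pairwise comparisons.
pairs = list(combinations(auc, 2))
for a, b in pairs:
    raw_p = wilcoxon(auc[a], auc[b]).pvalue
    adj_p = min(raw_p * len(pairs), 1.0)  # Bonferroni adjustment
    print(f"{a} vs {b}: raw p = {raw_p:.3g}, adjusted p = {adj_p:.3g}")
```

With scores of this shape, the clickstream-based features would dominate in the pairwise tests while the forum- and assignment-based pair would not be distinguishable, mirroring the pattern the abstract reports; other test families (e.g., Bayesian comparisons) could be substituted in the same framework.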

Published

2018-04-27

How to Cite

Gardner, J., & Brooks, C. (2018). Dropout Model Evaluation in MOOCs. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11392