On the Effectiveness of Curriculum Learning in Educational Text Scoring
DOI: https://doi.org/10.1609/aaai.v37i12.26707
Keywords: General
Abstract
Automatic Text Scoring (ATS) is a widely-investigated task in education. Existing approaches often stressed the structure design of an ATS model and neglected the training process of the model. Considering the difficult nature of this task, we argued that the performance of an ATS model could be potentially boosted by carefully selecting data of varying complexities in the training process. Therefore, we aimed to investigate the effectiveness of curriculum learning (CL) in scoring educational text. Specifically, we designed two types of difficulty measurers: (i) pre-defined, calculated by measuring a sample's readability, length, or the number of grammatical errors or unique words it contains; and (ii) automatic, calculated based on whether a model in a training epoch can accurately score the samples. These measurers were tested in both the easy-to-hard and hard-to-easy training paradigms. Through extensive evaluations on two widely-used datasets (one for short answer scoring and the other for long essay scoring), we demonstrated that (a) CL indeed could boost the performance of state-of-the-art ATS models, and the maximum improvement could be up to 4.5%, but most improvements were achieved when assessing short and easy answers; and (b) the pre-defined measurer calculated based on the number of grammatical errors contained in a text sample tended to outperform the other difficulty measurers across different training paradigms.
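The curriculum setup described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the length-based measurer and the sample texts are assumptions chosen for concreteness, standing in for the readability, grammatical-error, and unique-word measurers studied in the paper.

```python
# Illustrative sketch of a pre-defined curriculum ordering (assumed details,
# not the paper's actual code): a difficulty measurer scores each training
# sample, and samples are presented easy-to-hard or hard-to-easy.

def length_difficulty(text: str) -> int:
    """Pre-defined difficulty proxy: number of tokens in the sample."""
    return len(text.split())

def order_curriculum(samples, measurer, easy_to_hard=True):
    """Sort training samples by a difficulty measurer.

    easy_to_hard=True gives the easy-to-hard paradigm;
    easy_to_hard=False gives the hard-to-easy paradigm.
    """
    return sorted(samples, key=measurer, reverse=not easy_to_hard)

samples = [
    "a somewhat longer and more detailed student answer",
    "short answer",
    "yes",
]
curriculum = order_curriculum(samples, length_difficulty)
print(curriculum)  # shortest (easiest) sample comes first
```

An automatic measurer would replace `length_difficulty` with a score derived from the model's own per-sample accuracy at each training epoch, re-ordering the curriculum as training proceeds.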
Published
2023-06-26
How to Cite
Zeng, Z., Gasevic, D., & Chen, G. (2023). On the Effectiveness of Curriculum Learning in Educational Text Scoring. Proceedings of the AAAI Conference on Artificial Intelligence, 37(12), 14602-14610. https://doi.org/10.1609/aaai.v37i12.26707
Section
AAAI Special Track on AI for Social Impact