Automated Cross-prompt Scoring of Essay Traits

Authors

  • Robert Ridley, Nanjing University
  • Liang He, Nanjing University
  • Xin-yu Dai, Nanjing University
  • Shujian Huang, Nanjing University
  • Jiajun Chen, Nanjing University

DOI:

https://doi.org/10.1609/aaai.v35i15.17620

Keywords:

Psycholinguistics and Language Learning

Abstract

The majority of current research in Automated Essay Scoring (AES) focuses on prompt-specific scoring of either the overall quality of an essay or its quality with regard to certain traits. In real-world applications, obtaining labelled data for a target essay prompt is often expensive or infeasible, requiring an AES system to perform well when predicting scores for essays from unseen prompts. As a result, some recent research has been dedicated to cross-prompt AES. However, this line of research has thus far been concerned only with holistic, overall scoring, with no exploration of the scoring of individual traits. As users of AES systems often require feedback on different aspects of their writing, trait scoring is a necessary component of an effective AES system. To address this need, we introduce a new task, Automated Cross-prompt Scoring of Essay Traits, which requires a model trained solely on non-target-prompt essays to predict both the holistic, overall score and the scores of a number of specific traits for target-prompt essays. This task challenges a model's ability to generalize so that it can score essays from a novel domain, as well as its ability to represent essay quality from multiple aspects. In addition, we introduce a new approach that builds on a state-of-the-art method for cross-prompt AES. Our method utilizes a trait-attention mechanism and a multi-task architecture that leverages the relationships between traits to simultaneously predict the overall score and the score of each individual trait. We conduct extensive experiments on the widely used ASAP and ASAP++ datasets and demonstrate that our approach outperforms leading prompt-specific trait-scoring and cross-prompt AES methods.
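To make the described architecture concrete, the sketch below shows one way a multi-task scoring head with cross-trait attention could look in PyTorch. It is an illustrative reconstruction from the abstract, not the authors' published implementation: the class name TraitAttentionScorer, the dimensions, and the choice of a shared single-head attention module are all assumptions.

import torch
import torch.nn as nn


class TraitAttentionScorer(nn.Module):
    """Hypothetical multi-task head: one score per trait, with attention
    across trait representations to model inter-trait relationships.
    Trait index 0 is treated here as the holistic, overall score."""

    def __init__(self, hidden_dim: int, num_traits: int):
        super().__init__()
        # One private projection per trait, applied to a shared essay encoding.
        self.trait_proj = nn.ModuleList(
            [nn.Linear(hidden_dim, hidden_dim) for _ in range(num_traits)]
        )
        # Attention over all trait representations lets each trait's head
        # condition on the others before scoring.
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads=1, batch_first=True)
        self.scorers = nn.ModuleList(
            [nn.Linear(2 * hidden_dim, 1) for _ in range(num_traits)]
        )

    def forward(self, essay_repr: torch.Tensor) -> torch.Tensor:
        # essay_repr: (batch, hidden_dim), a prompt-agnostic essay encoding
        # produced by some upstream encoder (not shown here).
        trait_reprs = torch.stack(
            [torch.tanh(p(essay_repr)) for p in self.trait_proj], dim=1
        )  # (batch, num_traits, hidden_dim)
        # Each trait attends to every trait representation, including its own.
        attended, _ = self.attn(trait_reprs, trait_reprs, trait_reprs)
        combined = torch.cat([trait_reprs, attended], dim=-1)
        scores = [
            torch.sigmoid(self.scorers[t](combined[:, t, :]))
            for t in range(len(self.scorers))
        ]
        return torch.cat(scores, dim=-1)  # (batch, num_traits), scores in [0, 1]


# Usage: overall quality plus 8 trait scores, as in ASAP/ASAP++ setups.
model = TraitAttentionScorer(hidden_dim=128, num_traits=9)
essays = torch.randn(4, 128)   # stand-in for encoder output
print(model(essays).shape)     # torch.Size([4, 9])

Sharing one attention module across all trait heads is a simple way to realize the multi-task idea in the abstract: each trait's prediction is informed by the representations of the other traits, rather than being scored in isolation.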

Published

2021-05-18

How to Cite

Ridley, R., He, L., Dai, X.-y., Huang, S., & Chen, J. (2021). Automated Cross-prompt Scoring of Essay Traits. Proceedings of the AAAI Conference on Artificial Intelligence, 35(15), 13745-13753. https://doi.org/10.1609/aaai.v35i15.17620

Section

AAAI Technical Track on Speech and Natural Language Processing II