Automatic Short Answer Grading for Finnish with ChatGPT


  • Li-Hsin Chang University of Turku
  • Filip Ginter University of Turku



Keywords: Automatic Short Answer Grading (ASAG), Finnish, ChatGPT, Large Language Model (LLM)


Automatic short answer grading (ASAG) seeks to mitigate the burden on teachers by leveraging computational methods to evaluate student-constructed text responses. Large language models (LLMs) have recently gained prominence across diverse applications, with educational contexts being no exception. The sudden rise of ChatGPT has raised expectations that LLMs can handle numerous tasks, including ASAG. This paper aims to shed some light on this expectation by evaluating two LLM-based chatbots, namely ChatGPT built on GPT-3.5 and GPT-4, on scoring answers to short-answer questions under zero-shot and one-shot settings. Our data consists of 2000 student answers in Finnish from ten undergraduate courses. Multiple perspectives are taken into account during this assessment, encompassing those of grading system developers, teachers, and students. On our dataset, GPT-4 achieves a good QWK score (0.6+) in 44% of one-shot settings, clearly outperforming GPT-3.5 at 21%. We observe a negative association between student answer length and model performance, as well as a correlation between a smaller standard deviation across a set of predictions and lower performance. We conclude that while GPT-4 exhibits signs of being a capable grader, additional research is essential before considering its deployment as a reliable autograder.
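The abstract reports model quality as quadratic weighted kappa (QWK), which measures agreement between predicted and human-assigned grades while penalizing larger disagreements more heavily. As a minimal illustration (the function name and the toy grades below are our own, not from the paper), QWK can be computed in pure Python from the observed and chance-expected confusion matrices:

```python
def quadratic_weighted_kappa(y_true, y_pred, min_rating, max_rating):
    """Quadratic weighted kappa between two integer rating sequences.

    1.0 means perfect agreement, 0.0 means chance-level agreement;
    the 0.6+ range is conventionally read as good agreement.
    """
    n = max_rating - min_rating + 1
    num_items = len(y_true)
    # Observed confusion matrix: rows = true grades, columns = predicted grades.
    observed = [[0] * n for _ in range(n)]
    for t, p in zip(y_true, y_pred):
        observed[t - min_rating][p - min_rating] += 1
    # Marginal histograms of true and predicted grades.
    hist_true = [sum(row) for row in observed]
    hist_pred = [sum(observed[i][j] for i in range(n)) for j in range(n)]
    numerator = 0.0
    denominator = 0.0
    for i in range(n):
        for j in range(n):
            # Quadratic disagreement weight, normalized to [0, 1].
            w = ((i - j) ** 2) / ((n - 1) ** 2)
            numerator += w * observed[i][j]
            # Expected count under independence of the two raters.
            denominator += w * hist_true[i] * hist_pred[j] / num_items
    return 1.0 - numerator / denominator

# Toy example: one off-by-one disagreement on a 0-2 grade scale.
print(quadratic_weighted_kappa([0, 1, 2, 2], [0, 1, 1, 2], 0, 2))  # 0.8
```

The same quantity is available in scikit-learn as `cohen_kappa_score(y_true, y_pred, weights="quadratic")`, which is convenient when grades are already in NumPy arrays.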



How to Cite

Chang, L.-H., & Ginter, F. (2024). Automatic Short Answer Grading for Finnish with ChatGPT. Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23173–23181.