Activations as Features: Probing LLMs for Generalizable Essay Scoring Representations

Authors

  • Jinwei Chi, Jinan University
  • Ke Wang, Jinan University
  • Yu Chen, Jinan University
  • Xuanye Lin, South China University of Technology
  • Qiang Xu, Jinan University

DOI:

https://doi.org/10.1609/aaai.v40i36.40292

Abstract

Automated essay scoring (AES) is challenging in cross-prompt settings due to the diversity of scoring criteria. While previous studies have focused on the outputs of large language models (LLMs) to improve scoring accuracy, we believe that activations from intermediate layers may also provide valuable information. To explore this possibility, we evaluated the discriminative power of LLMs' activations on the cross-prompt essay scoring task. Specifically, we used activations to fit probes and further analyzed how the choice of model and the content given to the LLM as input affect this discriminative power. By computing the directions of essays across various trait dimensions under different prompts, we analyzed how the evaluation perspectives of LLMs vary with essay types and traits. Results show that the activations possess strong discriminative power for evaluating essay quality and that LLMs can adapt their evaluation perspectives to different traits and essay types, effectively handling the diversity of scoring criteria in cross-prompt settings.
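The abstract describes fitting probes on intermediate-layer activations. As a minimal sketch of what such a setup can look like (not the authors' code; the abstract does not specify their model, layer, pooling, or probe form), the snippet below extracts mean-pooled hidden states from one layer of a decoder-only LLM and fits a linear ridge-regression probe to predict essay scores. The model name, layer index, pooling strategy, probe choice, and the essays_train/scores_train variables are all illustrative assumptions.

# Minimal sketch (assumptions noted above): probe essay quality from
# intermediate-layer activations of a decoder-only LLM.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # assumption: any causal LLM works here
LAYER = 16                               # assumption: a middle layer

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, output_hidden_states=True, torch_dtype=torch.float16
).eval()

@torch.no_grad()
def essay_activation(essay: str) -> torch.Tensor:
    """Mean-pool the chosen layer's hidden states over the essay tokens."""
    inputs = tokenizer(essay, return_tensors="pt", truncation=True, max_length=1024)
    hidden = model(**inputs).hidden_states[LAYER]  # (1, seq_len, d_model)
    return hidden.mean(dim=1).squeeze(0).float()   # (d_model,)

# essays_train / scores_train and essays_test / scores_test are hypothetical
# placeholders for a cross-prompt split: fit on some prompts, test on held-out ones.
X_train = torch.stack([essay_activation(e) for e in essays_train]).numpy()
X_test = torch.stack([essay_activation(e) for e in essays_test]).numpy()

probe = Ridge(alpha=1.0).fit(X_train, scores_train)  # linear probe on activations
print("held-out R^2:", r2_score(scores_test, probe.predict(X_test)))

In a cross-prompt evaluation, the held-out prompts never appear in the probe's training data, so the probe's accuracy reflects how generalizable the activation features are across scoring criteria.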

Published

2026-03-14

How to Cite

Chi, J., Wang, K., Chen, Y., Lin, X., & Xu, Q. (2026). Activations as Features: Probing LLMs for Generalizable Essay Scoring Representations. Proceedings of the AAAI Conference on Artificial Intelligence, 40(36), 30395-30403. https://doi.org/10.1609/aaai.v40i36.40292

Issue

Vol. 40 No. 36 (2026)

Section

AAAI Technical Track on Natural Language Processing I