DSGram: Dynamic Weighting Sub-Metrics for Grammatical Error Correction in the Era of Large Language Models
DOI:
https://doi.org/10.1609/aaai.v39i24.34746

Abstract
Evaluating the performance of Grammatical Error Correction (GEC) models has become increasingly challenging, as large language model (LLM)-based GEC systems often produce corrections that diverge from provided gold references. This discrepancy undermines the reliability of traditional reference-based evaluation metrics. In this study, we propose a novel evaluation framework for GEC models, DSGram, integrating Semantic Coherence, Edit Level, and Fluency, and utilizing a dynamic weighting mechanism. Our framework employs the Analytic Hierarchy Process (AHP) in conjunction with large language models to ascertain the relative importance of various evaluation criteria. Additionally, we develop a dataset incorporating human annotations and LLM-simulated sentences to validate our algorithms and fine-tune more cost-effective models. Experimental results indicate that our proposed approach enhances the effectiveness of GEC model evaluations.
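The AHP weighting the abstract refers to can be sketched as follows: pairwise-comparison judgments over the three sub-metrics are collected into a matrix, and the priority weights are the matrix's normalized principal eigenvector. This is a minimal illustration of standard AHP, not the paper's implementation; the Saaty-scale judgments below are invented for the example (in DSGram, an LLM would supply such judgments dynamically).

```python
import numpy as np

# Hypothetical pairwise-comparison matrix over the three sub-metrics
# (Semantic Coherence, Edit Level, Fluency). Entry (i, j) says how much
# more important metric i is than metric j on the Saaty 1-9 scale.
# These values are illustrative assumptions, not taken from the paper.
A = np.array([
    [1.0, 3.0, 2.0],
    [1/3, 1.0, 1/2],
    [1/2, 2.0, 1.0],
])

def ahp_weights(matrix):
    """AHP priority vector: the normalized principal eigenvector."""
    eigvals, eigvecs = np.linalg.eig(matrix)
    principal = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, principal].real)
    return w / w.sum()

def consistency_ratio(matrix):
    """Saaty consistency ratio; values below 0.1 are conventionally acceptable."""
    n = matrix.shape[0]
    lam_max = np.max(np.linalg.eigvals(matrix).real)
    ci = (lam_max - n) / (n - 1)          # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random-index table
    return ci / ri

weights = ahp_weights(A)
print(dict(zip(["semantic_coherence", "edit_level", "fluency"], weights.round(3))))
print("CR:", round(consistency_ratio(A), 3))
```

A judgment matrix is only usable if it is approximately consistent, which is why the consistency ratio is checked before the weights are combined with the per-metric scores.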
Published
2025-04-11
How to Cite
Xie, J., Li, Y., Yin, X., & Wan, X. (2025). DSGram: Dynamic Weighting Sub-Metrics for Grammatical Error Correction in the Era of Large Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 39(24), 25561–25569. https://doi.org/10.1609/aaai.v39i24.34746
Section
AAAI Technical Track on Natural Language Processing III