Explaining Large Language Model-Based Neural Semantic Parsers (Student Abstract)
DOI:
https://doi.org/10.1609/aaai.v37i13.27014
Keywords:
Explanation, Large Language Model, Semantic Parsing
Abstract
While large language models (LLMs) have demonstrated strong capability in structured prediction tasks such as semantic parsing, little research has explored the underlying mechanisms of their success. Our work studies different methods for explaining an LLM-based semantic parser and qualitatively discusses the explained model behaviors, hoping to inspire future research toward a better understanding of these models.
Published
2023-09-06
How to Cite
Rai, D., Zhou, Y., Wang, B., & Yao, Z. (2023). Explaining Large Language Model-Based Neural Semantic Parsers (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 37(13), 16308-16309. https://doi.org/10.1609/aaai.v37i13.27014
Issue
Vol. 37 No. 13 (2023)
Section
AAAI Student Abstract and Poster Program