Unraveling the Influence of Training Data and Internal Structures in Large Language Models for Enhanced Explainability (Student Abstract)
DOI:
https://doi.org/10.1609/aaai.v39i28.35268

Abstract
Recent advances in deep learning have expanded the application of large language models (LLMs) across fields such as medicine, finance, and education. Understanding the mechanisms underlying these models is essential to mitigating issues such as hallucination and bias. This study offers deep learning practitioners insights into how specific training data points and internal structures influence model behaviour. Using influence functions and mechanistic interpretability, we analyze the impact of data on model predictions across various tasks. Preliminary findings indicate that semantic search techniques such as FAISS enable efficient identification of influential training points in GPT-2 small. Future work will extend these methods to additional tasks and more complex models, with a focus on further elucidating LLM structures to improve interpretability.
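The influence functions mentioned in the abstract are commonly formulated via the first-order approximation of Koh and Liang (2017); the abstract does not state its exact formulation, so the following is a standard sketch rather than the authors' specific method. For a trained parameter vector $\hat\theta$, the approximate effect of upweighting a training point $z$ on the loss at a test point $z_{\text{test}}$ is

\[
\mathcal{I}_{\text{up,loss}}(z, z_{\text{test}})
= -\nabla_\theta L(z_{\text{test}}, \hat\theta)^\top \, H_{\hat\theta}^{-1} \, \nabla_\theta L(z, \hat\theta),
\qquad
H_{\hat\theta} = \frac{1}{n}\sum_{i=1}^{n} \nabla_\theta^2 L(z_i, \hat\theta),
\]

where $L$ is the training loss and $H_{\hat\theta}$ is the empirical Hessian over the $n$ training points. Training points with large-magnitude scores are the "influential" points the abstract refers to; semantic search can pre-filter candidates so this expensive score is computed for only a few points.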
Published
2025-04-11
How to Cite
Li, L., & Sen, P. (2025). Unraveling the Influence of Training Data and Internal Structures in Large Language Models for Enhanced Explainability (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 39(28), 29407-29409. https://doi.org/10.1609/aaai.v39i28.35268
Section
AAAI Student Abstract and Poster Program