Tracing the Evolution of Information Transparency for OpenAI’s GPT Models through a Biographical Approach
DOI:
https://doi.org/10.1609/aies.v7i1.31757
Abstract
Information transparency, the open disclosure of information about models, is crucial for proactively evaluating the potential societal harms of large language models (LLMs) and developing effective risk mitigation measures. Adapting the biographies of artifacts and practices (BOAP) method from science and technology studies, this study analyzes the evolution of information transparency within OpenAI’s Generative Pre-trained Transformer (GPT) model reports and usage policies, from the series’ inception in 2018 to GPT-4, one of today’s most capable LLMs. To assess the breadth and depth of transparency practices, we develop a 9-dimensional, 3-level analytical framework for evaluating the comprehensiveness and accessibility of the information disclosed to various stakeholders. Findings suggest that while model limitations and downstream usages are increasingly clarified, model development processes have become more opaque. Transparency remains minimal in certain aspects, such as model explainability and real-world evidence of LLM impacts, and discussions of safety measures, such as technical interventions and regulatory pipelines, lack in-depth detail. The findings emphasize the need for enhanced transparency to foster accountability and ensure responsible technological innovation.
Published
2024-10-16
How to Cite
Xu, Z., & Mustafaraj, E. (2024). Tracing the Evolution of Information Transparency for OpenAI’s GPT Models through a Biographical Approach. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7(1), 1684-1695. https://doi.org/10.1609/aies.v7i1.31757
Section
Full Archival Papers