LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI's ChatGPT Plugins

Authors

  • Umar Iqbal, Washington University in St. Louis
  • Tadayoshi Kohno, University of Washington
  • Franziska Roesner, University of Washington

DOI:

https://doi.org/10.1609/aies.v7i1.31664

Abstract

Large language model (LLM) platforms, such as ChatGPT, have recently begun offering an app ecosystem to interface with third-party services on the internet. While these apps extend the capabilities of LLM platforms, they are developed by arbitrary third parties and thus cannot be implicitly trusted. Apps also interface with LLM platforms and users using natural language, which can have imprecise interpretations. In this paper, we propose a framework that lays a foundation for LLM platform designers to analyze and improve the security, privacy, and safety of current and future third-party integrated LLM platforms. Our framework is a formulation of an attack taxonomy that is developed by iteratively exploring how LLM platform stakeholders could leverage their capabilities and responsibilities to mount attacks against each other. As part of our iterative process, we apply our framework in the context of OpenAI's plugin (apps) ecosystem. We uncover plugins that concretely demonstrate the potential for the types of issues that we outline in our attack taxonomy. We conclude by discussing novel challenges and by providing recommendations to improve the security, privacy, and safety of present and future LLM-based computing platforms. The full version of this paper is available online at https://arxiv.org/abs/2309.10254

Published

2024-10-16

How to Cite

Iqbal, U., Kohno, T., & Roesner, F. (2024). LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI's ChatGPT Plugins. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7(1), 611–623. https://doi.org/10.1609/aies.v7i1.31664