AI Risk Profiles: A Standards Proposal for Pre-deployment AI Risk Disclosures

Authors

  • Eli Sherman, Credo AI
  • Ian Eisenberg, Credo AI

DOI

https://doi.org/10.1609/aaai.v38i21.30348

Keywords:

Assurance, Generative AI, Track: AI Incidents and Best Practices (paper), Education and Training

Abstract

As AI systems’ sophistication and proliferation have increased, awareness of the risks has grown proportionally. The AI industry is increasingly emphasizing the need for transparency, with proposals ranging from standardizing use of technical disclosures, like model cards, to regulatory licensing regimes. Since the AI value chain is complicated, with actors bringing varied expertise, perspectives, and values, it is crucial that consumers of transparency disclosures be able to understand the risks of the AI system in question. In this paper we propose a risk profiling standard which can guide downstream decision-making, including triaging further risk assessment, informing procurement and deployment, and directing regulatory frameworks. The standard is built on our proposed taxonomy of AI risks, which distills the wide variety of risks proposed in the literature into a high-level categorization. We outline the myriad data sources needed to construct informative Risk Profiles and propose a template and methodology for collating risk information into a standard, yet flexible, structure. We apply this methodology to a number of prominent AI systems using publicly available information. To conclude, we discuss design decisions for the profiles and future work.

Published

2024-03-24

How to Cite

Sherman, E., & Eisenberg, I. (2024). AI Risk Profiles: A Standards Proposal for Pre-deployment AI Risk Disclosures. Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23047-23052. https://doi.org/10.1609/aaai.v38i21.30348