An FDA for AI? Pitfalls and Plausibility of Approval Regulation for Frontier Artificial Intelligence

Authors

  • Daniel Carpenter, Harvard University
  • Carson Ezell, Harvard University

DOI:

https://doi.org/10.1609/aies.v7i1.31633

Abstract

Observers and practitioners of artificial intelligence (AI) have proposed an FDA-style licensing regime for the most advanced AI models, or 'frontier' models. In this paper, we explore the applicability of approval regulation (that is, regulation of a product that combines experimental minima with government licensure conditioned partially or fully upon that experimentation) to frontier AI. There are a number of reasons to believe that approval regulation, simplistically applied, would be inapposite for frontier AI risks. Domains of weak fit include the difficulty of defining the regulated product, the presence of Knightian uncertainty or deep ambiguity about harms from AI, the potentially transmissible nature of risks, and distributed activities among actors involved in the AI lifecycle. We conclude by highlighting the role of policy learning and experimentation in regulatory development, describing how learning from other forms of AI regulation and improvements in evaluation and testing methods can help to overcome some of the challenges we identify.

Published

2024-10-16

How to Cite

Carpenter, D., & Ezell, C. (2024). An FDA for AI? Pitfalls and Plausibility of Approval Regulation for Frontier Artificial Intelligence. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7(1), 239-254. https://doi.org/10.1609/aies.v7i1.31633