Responsible Reporting for Frontier AI Development

Authors

  • Noam Kolt, University of Toronto
  • Markus Anderljung, Centre for the Governance of AI
  • Joslyn Barnhart, Google DeepMind
  • Asher Brass, Institute for AI Policy and Strategy
  • Kevin Esvelt, Massachusetts Institute of Technology
  • Gillian K. Hadfield, University of Toronto and Vector Institute for AI
  • Lennart Heim, Centre for the Governance of AI
  • Mikel Rodriguez, Google DeepMind
  • Jonas B. Sandbrink, University of Oxford
  • Thomas Woodside, Center for Security and Emerging Technology

DOI

https://doi.org/10.1609/aies.v7i1.31678

Abstract

Mitigating the risks from frontier AI systems requires up-to-date and reliable information about those systems. Organizations that develop and deploy frontier systems have significant access to such information. By reporting safety-critical information to actors in government, industry, and civil society, these organizations could improve visibility into new and emerging risks posed by frontier systems. Equipped with this information, developers could make better-informed decisions about risk management, while policymakers could design more targeted and robust regulatory infrastructure. We outline the key features of responsible reporting and propose mechanisms for implementing them in practice.

Published

2024-10-16

How to Cite

Kolt, N., Anderljung, M., Barnhart, J., Brass, A., Esvelt, K., Hadfield, G. K., Heim, L., Rodriguez, M., Sandbrink, J. B., & Woodside, T. (2024). Responsible Reporting for Frontier AI Development. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7(1), 768-783. https://doi.org/10.1609/aies.v7i1.31678