How to Protect Copyright Data in Optimization of Large Language Models?

Authors

  • Timothy Chu, Google
  • Zhao Song, Adobe Research
  • Chiwun Yang, Sun Yat-sen University

DOI:

https://doi.org/10.1609/aaai.v38i16.29741

Keywords:

NLP: Ethics -- Bias, Fairness, Transparency & Privacy, ML: Optimization

Abstract

Large language models (LLMs) and generative AI have played a transformative role in computer science research and applications. Controversy has arisen as to whether these models output copyrighted data, which can occur if the data the models are trained on is copyrighted. LLMs are built on the transformer neural network architecture, which in turn relies on a mathematical computation called attention that uses the softmax function. In this paper, we observe that large language model training and optimization can be seen as a softmax regression problem. We then establish a method of efficiently performing softmax regression in a way that prevents the regression function from generating copyrighted data. This yields a theoretical method of training large language models in a way that avoids generating copyrighted data.
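For orientation, a commonly studied formulation of the softmax regression problem in this line of work (a sketch for context, not quoted from the paper itself) is: given a matrix $A \in \mathbb{R}^{n \times d}$ and a target vector $b \in \mathbb{R}^n$, solve

$$\min_{x \in \mathbb{R}^d} \; \left\| \langle \exp(Ax), \mathbf{1}_n \rangle^{-1} \exp(Ax) - b \right\|_2 ,$$

where $\exp(\cdot)$ is applied entrywise and the inner product with $\mathbf{1}_n$ normalizes $\exp(Ax)$ into a probability vector, mirroring the softmax normalization inside attention. A copyright-protecting variant would additionally constrain or penalize the regression output so it stays away from copyrighted targets; the precise construction is given in the paper.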

Published

2024-03-24

How to Cite

Chu, T., Song, Z., & Yang, C. (2024). How to Protect Copyright Data in Optimization of Large Language Models?. Proceedings of the AAAI Conference on Artificial Intelligence, 38(16), 17871-17879. https://doi.org/10.1609/aaai.v38i16.29741

Section

AAAI Technical Track on Natural Language Processing I