Personalized LoRA for Human-Centered Text Understanding

Authors

  • You Zhang (Yunnan University)
  • Jin Wang (Yunnan University)
  • Liang-Chih Yu (Yuan Ze University)
  • Dan Xu (Yunnan University)
  • Xuejie Zhang (Yunnan University)

DOI:

https://doi.org/10.1609/aaai.v38i17.29931

Keywords:

NLP: Sentiment Analysis, Stylistic Analysis, and Argument Mining

Abstract

Effectively and efficiently adapting a pre-trained language model (PLM) for human-centered text understanding (HCTU) is challenging, since most personalized applications involve millions of user tokens that carry no concrete explicit semantics. A standard parameter-efficient approach (e.g., LoRA) would require memorizing a separate suite of adapters for each user. In this work, we introduce a personalized LoRA (PLoRA) with a plug-and-play (PnP) framework for the HCTU task. PLoRA is effective, parameter-efficient, and can be dynamically deployed in PLMs. Moreover, a personalized dropout strategy and a mutual-information-maximization strategy are adopted, so the proposed PLoRA adapts well to few/zero-shot learning scenarios and mitigates the cold-start issue. Experiments conducted on four benchmark datasets show that the proposed method outperforms existing methods in full/few/zero-shot learning scenarios for the HCTU task, despite having fewer trainable parameters. For reproducibility, the code for this paper is available at: https://github.com/yoyo-yun/PLoRA.
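The core idea can be sketched as follows. A minimal NumPy illustration, assuming the usual LoRA forward pass y = xW + xAB; the per-user gating vector `u` and all function names here are illustrative assumptions for how a single shared adapter could be personalized, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r = 8, 6, 4                    # hidden sizes and LoRA rank (illustrative)
W = rng.standard_normal((d_in, d_out))      # frozen pre-trained weight
A = rng.standard_normal((d_in, r)) * 0.01   # low-rank down-projection
B = rng.standard_normal((r, d_out))         # random for the demo; real LoRA initializes B to zero

def lora_forward(x, W, A, B, scale=1.0):
    """Plain LoRA: y = xW + scale * xAB (W stays frozen; only A, B train)."""
    return x @ W + scale * (x @ A @ B)

def plora_forward(x, W, A, B, u):
    """Hypothetical personalized LoRA: a per-user vector u (length r)
    gates the shared low-rank path, so one adapter serves many users
    instead of storing a separate (A, B) suite per user."""
    return x @ W + ((x @ A) * u) @ B

x = rng.standard_normal((2, d_in))
u = np.ones(r)  # identity gate: recovers plain LoRA exactly
assert np.allclose(plora_forward(x, W, A, B, u), lora_forward(x, W, A, B))
```

With a user-specific `u`, only r extra numbers are stored per user, which is what makes such a scheme plug-and-play at million-user scale; the gating form shown is one plausible instantiation, not the published architecture.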

Published

2024-03-24

How to Cite

Zhang, Y., Wang, J., Yu, L.-C., Xu, D., & Zhang, X. (2024). Personalized LoRA for Human-Centered Text Understanding. Proceedings of the AAAI Conference on Artificial Intelligence, 38(17), 19588-19596. https://doi.org/10.1609/aaai.v38i17.29931

Section

AAAI Technical Track on Natural Language Processing II