Large Language Models Enhanced Personalized Graph Neural Architecture Search in Federated Learning

Authors

  • Hui Fang Zhejiang Key Laboratory of Accessible Perception and Intelligent Systems, College of Computer Science, Zhejiang University, China
  • Yang Gao Zhejiang Key Laboratory of Accessible Perception and Intelligent Systems, College of Computer Science, Zhejiang University, China
  • Peng Zhang Cyberspace Institute of Advanced Technology, Guangzhou University, China
  • Jiangchao Yao Cooperative Medianet Innovation Center, Shanghai Jiao Tong University, China
  • Hongyang Chen Research Center for Data Hub and Security, Zhejiang Lab, China
  • Haishuai Wang Zhejiang Key Laboratory of Accessible Perception and Intelligent Systems, College of Computer Science, Zhejiang University, China

DOI:

https://doi.org/10.1609/aaai.v39i16.33814

Abstract

Personalized federated learning (PFL) on graphs is an emerging field in which multiple clients, each holding graph data with a distinct distribution, collaboratively develop models under strict privacy constraints. Model design in this setting typically requires extensive expert intervention, which is a significant limitation. Recent advancements have aimed to automate the search for graph neural network architectures, incorporating large language models (LLMs) for their advanced reasoning and self-reflection capabilities. However, two technical challenges persist. First, although LLMs are effective in natural language processing, their ability to meet the complex demands of graph neural architecture search (GNAS) is still being explored. Second, while LLMs can guide the architecture search process, they do not directly address client drift caused by heterogeneous data distributions. To address these challenges, we introduce a novel method, Personalized Federated Graph Neural Architecture Search (PFGNAS), which employs a task-specific prompt to continuously identify and integrate optimal GNN architectures. To counteract client drift, PFGNAS adopts a weight-sharing supernet strategy that optimizes local architectures while preserving client-specific personalization. Extensive evaluations show that PFGNAS significantly outperforms traditional PFL methods, highlighting the advantages of integrating LLMs into personalized federated learning environments.
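The abstract's weight-sharing supernet idea can be illustrated with a toy sketch: all clients share one pool of per-operation weights, each client trains only the operations its own (e.g. LLM-suggested) architecture selects, and the server averages updates per shared weight. All names here (`OPS`, `Supernet`, `client_step`, the scalar "weights") are illustrative placeholders, not the paper's actual implementation.

```python
# Toy sketch of a weight-sharing supernet for personalized federated GNAS.
# One scalar stands in for each candidate operation's full parameter set.

OPS = ["gcn", "gat", "sage", "gin"]  # hypothetical candidate GNN ops per layer

class Supernet:
    """Shared pool of per-operation weights, keyed by (layer, op)."""
    def __init__(self, num_layers=2):
        self.weights = {(l, op): 0.0 for l in range(num_layers) for op in OPS}

def client_step(supernet, architecture, grad=0.1, lr=0.5):
    """A client trains only the ops its personalized architecture selects.
    `architecture` is one op name per layer, e.g. ["gat", "gcn"]."""
    updates = {}
    for layer, op in enumerate(architecture):
        key = (layer, op)
        updates[key] = supernet.weights[key] - lr * grad  # toy SGD step
    return updates

def server_aggregate(supernet, client_updates):
    """FedAvg-style aggregation: average updates per shared weight;
    ops no client touched keep their previous value."""
    touched = {}
    for updates in client_updates:
        for key, val in updates.items():
            touched.setdefault(key, []).append(val)
    for key, vals in touched.items():
        supernet.weights[key] = sum(vals) / len(vals)

net = Supernet()
archs = [["gat", "gcn"], ["gat", "sage"]]  # two clients, personalized architectures
updates = [client_step(net, a) for a in archs]
server_aggregate(net, updates)
print(net.weights[(0, "gat")])  # shared op updated by both clients
```

The design point the sketch captures is that personalization lives in the per-client architecture choice, while the supernet's shared weights let collaborative training counteract client drift.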

Published

2025-04-11

How to Cite

Fang, H., Gao, Y., Zhang, P., Yao, J., Chen, H., & Wang, H. (2025). Large Language Models Enhanced Personalized Graph Neural Architecture Search in Federated Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 39(16), 16514–16522. https://doi.org/10.1609/aaai.v39i16.33814

Section

AAAI Technical Track on Machine Learning II