LR-XFL: Logical Reasoning-Based Explainable Federated Learning

Authors

  • Yanci Zhang, Nanyang Technological University
  • Han Yu, Nanyang Technological University

DOI:

https://doi.org/10.1609/aaai.v38i19.30179

Keywords:

General

Abstract

Federated learning (FL) is an emerging approach for training machine learning models collaboratively while preserving data privacy. The need for privacy protection makes it difficult for FL models to achieve global transparency and explainability. To address this limitation, we incorporate logic-based explanations into FL by proposing the Logical Reasoning-based eXplainable Federated Learning (LR-XFL) approach. Under LR-XFL, FL clients create local logic rules based on their local data and send them, along with model updates, to the FL server. The FL server connects the local logic rules through a proper logical connector that is derived from properties of the client data, without requiring access to the raw data. In addition, the server aggregates the local model updates with weights determined by the quality of the clients' local data, as reflected by their uploaded logic rules. Experimental results show that LR-XFL outperforms the most relevant baseline by 1.19%, 5.81% and 5.41% in terms of classification accuracy, rule accuracy and rule fidelity, respectively. The explicit rule evaluation and expression under LR-XFL enable human experts to validate and correct the rules on the server side, improving the global FL model's robustness to errors. LR-XFL has the potential to enhance the transparency of FL models in areas like healthcare and finance, where both data privacy and explainability are important.
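The server-side behavior described in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's exact algorithm: the weighting scheme (weights proportional to rule accuracy) and the connector heuristic (AND for overlapping client data, OR for disjoint data) are assumptions made for clarity, and all function and variable names are invented for this sketch.

```python
# Hypothetical sketch of LR-XFL's server-side step: client model updates
# are averaged with weights derived from each client's rule quality, and
# local logic rules are joined by a logical connector chosen from a simple
# data-similarity heuristic. Details are illustrative assumptions.

def aggregate(client_updates, rule_accuracies):
    """Weighted average of client parameter vectors.

    Weights are proportional to each client's rule accuracy, standing in
    for 'data quality as reflected by the uploaded logic rules'.
    """
    total = sum(rule_accuracies)
    weights = [a / total for a in rule_accuracies]
    dim = len(client_updates[0])
    return [sum(w * u[i] for w, u in zip(weights, client_updates))
            for i in range(dim)]

def connect_rules(rules, data_overlap):
    """Join local rules into one global rule.

    Heuristic (assumed): AND when client data distributions overlap
    (every local rule should hold), OR when they cover disjoint regions
    (any one local rule may apply to a given sample).
    """
    connector = " AND " if data_overlap > 0.5 else " OR "
    return "(" + connector.join(rules) + ")"

# Example: two clients with disjoint data; the first client's rules
# are more accurate, so its update dominates the aggregate.
updates = [[1.0, 0.0], [0.0, 1.0]]
accuracies = [0.75, 0.25]
global_update = aggregate(updates, accuracies)
global_rule = connect_rules(["x1 & ~x2", "x2"], data_overlap=0.2)
```

Because the global rule is an explicit logical expression, a human expert on the server side can inspect it and drop or correct a faulty clause, which is the robustness mechanism the abstract highlights.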

Published

2024-03-24

How to Cite

Zhang, Y., & Yu, H. (2024). LR-XFL: Logical Reasoning-Based Explainable Federated Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(19), 21788-21796. https://doi.org/10.1609/aaai.v38i19.30179

Section

AAAI Technical Track on Safe, Robust and Responsible AI Track