Scalable and Trustworthy Learning in Heterogeneous Networks

Authors

  • Tian Li University of Chicago

DOI:

https://doi.org/10.1609/aaai.v39i27.35110

Abstract

To build a responsible data economy and protect data ownership, it is crucial to enable learning models from separate, heterogeneous data sources without centralization. For example, federated learning (FL) aims to train models across massive remote devices or isolated organizations while keeping user data local. However, federated learning can face critical practical issues such as scalability, noisy samples, biased learning systems or procedures, and privacy leakage. At the intersection of optimization, trustworthy (fair, robust, and private) ML, and learning in heterogeneous environments, my research aims to support scalable and responsible data sharing to collectively build intelligent models.
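The federated training pattern described above can be sketched with federated averaging (FedAvg), the canonical FL algorithm: each client runs local training on its own data, and a server aggregates only the resulting model weights. The toy one-parameter linear model and all names below are illustrative assumptions, not the author's specific method.

```python
# Minimal FedAvg sketch: raw data stays on each client; only model
# weights are communicated and averaged by the server.
import random

def local_sgd(w, data, lr=0.1, epochs=5):
    """Run SGD on one client's local data; the data never leaves the client."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of the squared error (w*x - y)^2
            w -= lr * grad
    return w

def fedavg(clients, rounds=20):
    """Server loop: broadcast weights, collect local updates, average them."""
    w = 0.0
    for _ in range(rounds):
        updates = [local_sgd(w, data) for data in clients]
        w = sum(updates) / len(updates)  # weight-averaging aggregation step
    return w

# Heterogeneous clients: each samples (x, y = 2x + noise) from a
# different input range, mimicking non-identical local distributions.
random.seed(0)
clients = [
    [(x / 10, 2 * x / 10 + random.gauss(0, 0.01)) for x in range(lo, lo + 10)]
    for lo in (0, 10, 20)
]
w = fedavg(clients)
print(round(w, 2))  # recovered slope, close to the true value 2
```

Heterogeneity across clients (here, disjoint input ranges) is exactly what makes plain averaging fragile in practice, motivating the fairness and robustness concerns raised in the abstract.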

Published

2025-04-11

How to Cite

Li, T. (2025). Scalable and Trustworthy Learning in Heterogeneous Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 39(27), 28715-28715. https://doi.org/10.1609/aaai.v39i27.35110