Towards Reliable Learning in the Wild: Generalization and Adaptation

Authors

  • Huaxiu Yao University of North Carolina at Chapel Hill

DOI:

https://doi.org/10.1609/aaai.v38i20.30299

Keywords:

Reliable Machine Learning, Distribution Shift, Generalization

Abstract

The real-world deployment of machine learning algorithms often poses challenges due to shifts in data distributions and tasks. Because a model may not have encountered such changes during training, these shifts can degrade performance, hinder generalization to new scenarios, and ultimately lead to poor results in real-world applications. In this talk, I will present our research on building machine learning models that are highly generalizable and easily adaptable to different shifts. Specifically, I will first discuss our approach to improving out-of-distribution robustness and mitigating spurious correlations by training environment-invariant models through selective augmentation and post-hoc rectification. Second, I will present our techniques for continuous and rapid adaptation of models to new tasks and environments, including methods that facilitate compositional generalization and adaptation by extracting relationships from historical observations, and methods that enable reliable adaptation even in the face of imperfect observations. Additionally, I will showcase our successful practices for addressing shifts in real-world applications, such as the healthcare, e-commerce, and transportation industries. The talk will also touch upon the remaining challenges and outline future research directions in this area.
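The abstract names selective augmentation as one route to environment-invariant models but does not spell out the procedure. As a rough, hedged illustration only, the Python sketch below shows one common selective-augmentation strategy: interpolating pairs of same-label examples drawn from different environments, so environment-specific features are mixed away while labels are preserved. The function name `selective_mixup` and its interface are hypothetical and not taken from the talk.

```python
import numpy as np

def selective_mixup(x, y, env, alpha=2.0, rng=None):
    """Illustrative selective augmentation (assumed, not the author's exact method):
    interpolate each example with a partner that shares its label but comes from a
    different environment, encouraging environment-invariant features.

    x   : (n, d) feature array
    y   : (n,)   integer class labels
    env : (n,)   integer environment / domain ids
    """
    rng = rng or np.random.default_rng()
    n = len(y)
    partners = np.empty(n, dtype=int)
    for i in range(n):
        # Candidates: same label, different environment.
        candidates = np.flatnonzero((y == y[i]) & (env != env[i]))
        if len(candidates) == 0:
            # Fall back to any same-label example if no cross-environment pair exists.
            candidates = np.flatnonzero(y == y[i])
        partners[i] = rng.choice(candidates)
    # Mixing coefficients drawn from a Beta distribution, as in standard mixup.
    lam = rng.beta(alpha, alpha, size=(n, 1))
    x_aug = lam * x + (1.0 - lam) * x[partners]
    return x_aug, y  # labels are unchanged: each pair shares the same class
```

Because the two examples in each pair agree on the label but differ in environment, the interpolated inputs carry the label-relevant signal while environment-specific cues are blended, which is the intuition behind using selective augmentation to reduce reliance on spurious correlations.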

Published

2024-03-24

How to Cite

Yao, H. (2024). Towards Reliable Learning in the Wild: Generalization and Adaptation. Proceedings of the AAAI Conference on Artificial Intelligence, 38(20), 22683-22683. https://doi.org/10.1609/aaai.v38i20.30299