Robust and Adaptive Deep Learning via Bayesian Principles

Authors

  • Yingzhen Li, Department of Computing, Imperial College London, UK

DOI:

https://doi.org/10.1609/aaai.v37i13.26813

Keywords:

New Faculty Highlights

Abstract

Deep learning models have achieved tremendous success in accurate prediction for computer vision, natural language processing and speech recognition applications. However, to succeed in high-risk and safety-critical domains such as healthcare and finance, these deep learning models need to be made reliable and trustworthy. Specifically, they need to be robust and adaptive to real-world environments, which can differ drastically from the training settings. In this talk, I will advocate for Bayesian principles to achieve the goal of building robust and adaptive deep learning models. I will introduce a suite of uncertainty quantification methods for Bayesian deep learning, and demonstrate applications enabled by accurate uncertainty estimates, e.g., robust prediction, continual learning and repairing model failures. I will conclude by discussing the research challenges and potential impact of robust and adaptive deep learning models. This paper is part of the AAAI-23 New Faculty Highlights.
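The abstract refers to uncertainty quantification for Bayesian deep learning without fixing a particular method. As an illustrative sketch only (not the method of the talk), the code below uses Monte Carlo dropout, one standard approximate-Bayesian technique, to turn a stochastic network into a predictive distribution with an entropy-based uncertainty score; the model architecture, dimensions, and function names are hypothetical.

    # Illustrative sketch: Monte Carlo dropout for predictive uncertainty (PyTorch).
    import torch
    import torch.nn as nn

    class MCDropoutNet(nn.Module):
        """Small classifier whose dropout layers are kept active at test time."""
        def __init__(self, in_dim=16, hidden=64, n_classes=3, p=0.2):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p),
                nn.Linear(hidden, n_classes),
            )

        def forward(self, x):
            return self.net(x)

    @torch.no_grad()
    def predict_with_uncertainty(model, x, n_samples=50):
        """Average class probabilities over stochastic forward passes and
        report the predictive entropy as an uncertainty score."""
        model.train()  # keep dropout on so each pass samples a different sub-network
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )                                # shape: (n_samples, batch, n_classes)
        mean_probs = probs.mean(dim=0)   # Monte Carlo estimate of p(y | x)
        entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
        return mean_probs, entropy

    if __name__ == "__main__":
        model = MCDropoutNet()
        x = torch.randn(4, 16)           # a toy batch of 4 inputs
        mean_probs, entropy = predict_with_uncertainty(model, x)
        print(mean_probs, entropy)       # high entropy flags inputs the model is unsure about

In a robust-prediction or failure-repair setting, such an uncertainty score could be used to defer or flag predictions on inputs that fall far from the training distribution.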

Published

2024-07-15

How to Cite

Li, Y. (2024). Robust and Adaptive Deep Learning via Bayesian Principles. Proceedings of the AAAI Conference on Artificial Intelligence, 37(13), 15446-15446. https://doi.org/10.1609/aaai.v37i13.26813