Estimating Uncertainty Online Against an Adversary

Authors

  • Volodymyr Kuleshov, Stanford University
  • Stefano Ermon, Stanford University

DOI:

https://doi.org/10.1609/aaai.v31i1.10949

Keywords:

online learning, calibration, uncertainty estimation

Abstract

Assessing uncertainty is an important step towards ensuring the safety and reliability of machine learning systems. Existing uncertainty estimation techniques may fail when their modeling assumptions are not met, e.g., when the data distribution differs from the one seen at training time. Here, we propose techniques that assess a classification algorithm’s uncertainty via calibrated probabilities (i.e., probabilities that match empirical outcome frequencies in the long run) and that are guaranteed to be reliable (i.e., accurate and calibrated) on out-of-distribution input, including input generated by an adversary. This extends classical online learning, which guarantees accuracy under adversarial assumptions, to also handle uncertainty. We establish formal guarantees for our methods, and we validate them on two real-world problems: question answering and medical diagnosis from genomic data.
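
For intuition, the abstract’s notion of calibration (predicted probabilities matching empirical outcome frequencies in the long run) can be checked on a finite sample by binning predictions and comparing each bin’s average predicted probability to its empirical outcome rate. The sketch below is illustrative only and is not code from the paper; the function name `calibration_error`, the binning scheme, and the bin-mass weighting are our own assumptions.

```python
import numpy as np

def calibration_error(probs, outcomes, n_bins=10):
    """Gap between predicted probabilities and empirical outcome
    frequencies, averaged over bins weighted by bin mass (an
    expected-calibration-error style score; lower is better)."""
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    error = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Right-closed only on the final bin so that probs == 1.0 are counted.
        mask = (probs >= lo) & ((probs < hi) if hi < 1.0 else (probs <= hi))
        if mask.any():
            gap = abs(probs[mask].mean() - outcomes[mask].mean())
            error += mask.mean() * gap  # weight the gap by the bin's share of samples
    return error

# A forecaster is calibrated when this score tends to zero in the long run.
rng = np.random.default_rng(0)
probs = rng.uniform(size=10_000)
outcomes = rng.uniform(size=10_000) < probs  # outcomes occur at the predicted rates
print(calibration_error(probs, outcomes))    # near 0 for this calibrated forecaster
```

In the paper’s online setting, the point is that a score of this kind can be driven to zero even when an adversary chooses the inputs; the batch check above only illustrates the target property.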

Published

2017-02-13

How to Cite

Kuleshov, V., & Ermon, S. (2017). Estimating Uncertainty Online Against an Adversary. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.10949