Evaluating and Improving Interactions with Hazy Oracles

Authors

  • Stephan J. Lemmer, University of Michigan, Ann Arbor, MI
  • Jason J. Corso, University of Michigan, Ann Arbor, MI

DOI:

https://doi.org/10.1609/aaai.v37i5.25746

Keywords:

HAI: Human-Computer Interaction, ML: Evaluation and Analysis (Machine Learning)

Abstract

Many AI systems integrate sensor inputs, world knowledge, and human-provided information to perform inference. While such systems often treat the human input as flawless, humans are better thought of as hazy oracles whose input may be ambiguous or outside the AI system's understanding. In such situations, it makes sense for the AI system to defer its inference while it disambiguates the human-provided information by, for example, asking the human to rephrase the query. Though this approach has been considered in the past, current work is typically limited to application-specific methods and non-standardized human experiments. We instead introduce and formalize a general notion of deferred inference. Using this formulation, we then propose a novel evaluation centered around the Deferred Error Volume (DEV) metric, which explicitly considers the tradeoff between error reduction and the additional human effort required to achieve it. We demonstrate this new formalization and an innovative deferred inference method on the disparate tasks of Single-Target Video Object Tracking and Referring Expression Comprehension, ultimately reducing error by up to 48% without any change to the underlying model or its parameters.
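To make the deferral idea concrete, below is a minimal toy sketch (not the paper's method or its DEV definition) of a deferred inference loop: when a model's confidence in the human-provided input falls below a threshold, it defers and asks for a rephrasing instead of committing to an answer. Sweeping the threshold traces out the error-vs-human-effort tradeoff that a metric in the spirit of DEV is meant to summarize. All names (`noisy_model`, `run_episode`, `evaluate`) and the simulated quality dynamics are hypothetical.

```python
import random

random.seed(0)

def noisy_model(query_quality):
    """Toy stand-in for an AI model: higher-quality (less ambiguous)
    human input yields a higher chance of a correct inference.
    Confidence is taken to equal the input quality for simplicity."""
    correct = random.random() < query_quality
    return correct, query_quality

def run_episode(threshold, max_rounds=3):
    """Deferred inference loop: while confidence is below the threshold,
    defer and ask the human to rephrase, which (in this toy setup)
    tends to improve the input quality."""
    quality = random.uniform(0.3, 0.9)  # initial human input quality
    rounds = 0
    while True:
        correct, confidence = noisy_model(quality)
        if confidence >= threshold or rounds >= max_rounds:
            return correct, rounds
        rounds += 1  # one deferral = one extra unit of human effort
        quality = min(1.0, quality + random.uniform(0.0, 0.2))  # rephrased input

def evaluate(threshold, n=2000):
    """Return (error rate, mean deferrals per query) for a threshold."""
    errors = deferrals = 0
    for _ in range(n):
        correct, rounds = run_episode(threshold)
        errors += not correct
        deferrals += rounds
    return errors / n, deferrals / n

# Sweeping the threshold traces an error-vs-effort curve; summarizing
# that curve, rather than error alone, is the spirit of the DEV metric.
for t in (0.0, 0.5, 0.8):
    err, eff = evaluate(t)
    print(f"threshold={t:.1f}  error={err:.3f}  deferrals/query={eff:.2f}")
```

The design point the sketch illustrates: a threshold of 0.0 never defers (no extra human effort, higher error), while higher thresholds buy lower error at the cost of more interaction rounds, so any fair evaluation must report both axes.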

Published

2023-06-26

How to Cite

Lemmer, S. J., & Corso, J. J. (2023). Evaluating and Improving Interactions with Hazy Oracles. Proceedings of the AAAI Conference on Artificial Intelligence, 37(5), 6039-6047. https://doi.org/10.1609/aaai.v37i5.25746

Section

AAAI Technical Track on Humans and AI