InteractEva: A Simulation-Based Evaluation Framework for Interactive AI Systems
Keywords: Human-in-the-loop, Interactive Machine Learning, Interactive AI, Evaluation, User Simulation
Abstract
Evaluating interactive AI (IAI) systems is challenging, as their output depends heavily on the actions users perform. As a result, developers often rely on limited and mostly qualitative data from user testing to improve their systems. In this paper, we present InteractEva, a systematic evaluation framework for IAI systems. InteractEva combines (a) a user-simulation backend that tests the system against different use cases and user interactions at scale with (b) an interactive frontend that lets developers perform important quantitative evaluation tasks, including acquiring a performance overview, performing error analysis, and conducting what-if studies. The framework has supported the evaluation and improvement of an industrial IAI text extraction system, results of which will be presented during our demonstration.
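The core idea of the abstract, simulating many user sessions against an interactive system and aggregating quantitative metrics, can be illustrated with a minimal sketch. All names here (iai_extract, SimulatedUser-style helpers, run_session, evaluate) are illustrative assumptions, not the actual InteractEva API, and the toy scoring model stands in for a real IAI text extraction system:

```python
import random
from statistics import mean

def iai_extract(document, feedback):
    """Toy stand-in for an interactive extractor: quality improves as the
    simulated user supplies more corrective feedback (hypothetical model)."""
    base_accuracy = 0.5
    return min(1.0, base_accuracy + 0.1 * len(feedback))

def run_session(document, n_actions, rng):
    """Simulate one user session consisting of n_actions corrective interactions."""
    feedback = [rng.random() for _ in range(n_actions)]  # each value stands in for one user correction
    return iai_extract(document, feedback)

def evaluate(n_sessions=100, seed=0):
    """Aggregate accuracy over many simulated sessions with varying user effort,
    yielding the kind of quantitative overview a frontend could visualize."""
    rng = random.Random(seed)
    scores = [run_session("doc", rng.randint(0, 5), rng) for _ in range(n_sessions)]
    return mean(scores)

print(round(evaluate(), 3))
```

Varying the session parameters (number of actions, action policy, random seed) is one plausible way to realize the what-if studies the abstract mentions, since each configuration can be re-simulated at scale without recruiting users.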
How to Cite
Katsis, Y., Hanafi, M. F., Santillán Cooper, M., & Li, Y. (2022). InteractEva: A Simulation-Based Evaluation Framework for Interactive AI Systems. Proceedings of the AAAI Conference on Artificial Intelligence, 36(11), 13182-13184. https://doi.org/10.1609/aaai.v36i11.21721
AAAI Demonstration Track