DFEE: Interactive DataFlow Execution and Evaluation Kit
DOI:
https://doi.org/10.1609/aaai.v37i13.27073
Keywords:
DataFlow, Semantic Parsing, Program Synthesis, Dialog2API, Temporal Reasoning, Event Scheduling, Execution Accuracy
Abstract
DataFlow is emerging as a new paradigm for building task-oriented chatbots due to its expressive semantic representations of dialogue tasks. Despite the availability of the large SMCalFlow dataset and a simplified syntax, developing and evaluating DataFlow-based chatbots remains challenging because of system complexity and the lack of downstream toolchains. In this demonstration, we present DFEE, an interactive DataFlow Execution and Evaluation toolkit that supports execution, visualization, and benchmarking of semantic parsers given dialogue input and a backend database. We demonstrate the system on a complex dialogue task: event scheduling, which involves temporal reasoning. DFEE also supports diagnosing parsing results via a user-friendly interface that lets developers examine the dynamic DataFlow and the corresponding execution results. To illustrate how to benchmark state-of-the-art models, we propose a novel benchmark covering more sophisticated event-scheduling scenarios and a new metric for task success evaluation. The code for DFEE has been released at https://github.com/amazonscience/dataflow-evaluation-toolkit.
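The abstract's "Execution Accuracy" keyword and its task-success metric point to execution-based scoring: a predicted program counts as correct if running it against the backend yields the same result as running the gold program. The sketch below is a minimal illustration of that idea only, not the DFEE API; the `execute` callable is a hypothetical stand-in for the toolkit's executor over the backend database.

```python
# Hypothetical sketch of execution-based scoring; not the DFEE API.
# `execute` is assumed to run a DataFlow program string against a
# backend database and return the resulting response/state.
from typing import Any, Callable, Sequence


def execution_accuracy(
    predicted: Sequence[str],
    gold: Sequence[str],
    execute: Callable[[str], Any],
) -> float:
    """Fraction of turns whose predicted program, once executed,
    produces the same result as the gold program."""
    assert len(predicted) == len(gold)
    matches = 0
    for pred_prog, gold_prog in zip(predicted, gold):
        try:
            matches += execute(pred_prog) == execute(gold_prog)
        except Exception:
            # A program that fails to execute counts as a miss.
            pass
    return matches / len(gold) if gold else 0.0
```

Comparing execution results rather than program strings credits semantically equivalent parses that differ syntactically, which is why execution accuracy is a stricter yet fairer measure of task success than exact-match parsing accuracy.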
Published
2023-09-06
How to Cite
He, H., Feng, S., Bonadiman, D., Zhang, Y., & Mansour, S. (2023). DFEE: Interactive DataFlow Execution and Evaluation Kit. Proceedings of the AAAI Conference on Artificial Intelligence, 37(13), 16443-16445. https://doi.org/10.1609/aaai.v37i13.27073
Issue
Vol. 37 No. 13 (2023)
Section
Demonstrations