DetAIL: A Tool to Automatically Detect and Analyze Drift in Language
Keywords: Text Drift, Variational Auto-Encoders, Model Testing
Abstract
Machine learning and deep learning-based decision making has become part of today's software. The goal of this work is to ensure that machine learning and deep learning-based systems are as trusted as traditional software. Traditional software is made dependable by following rigorous practices such as static analysis, testing, debugging, verification, and repair throughout the development and maintenance life-cycle. Similarly, machine learning models must be kept up to date so that their performance is not compromised. To this end, current systems rely on scheduled re-training of these models as new data arrives. In this work, we propose DetAIL, a tool that measures the data drift that takes place as new data arrives, so that models can be re-trained adaptively whenever re-training is actually required, rather than on a fixed schedule. In addition, we generate explanations at both the sentence level and the dataset level to capture why a given payload text has drifted.
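The core idea of drift-triggered re-training can be illustrated with a minimal sketch. This is not the paper's actual method (DetAIL uses variational auto-encoders per its keywords); it is a simplified stand-in that compares the mean embedding of incoming payload text against a reference (training-time) distribution and flags re-training when the distance exceeds a threshold. All names (`drift_score`, `should_retrain`) and the threshold value are hypothetical.

```python
import numpy as np

def drift_score(ref_embeddings, new_embeddings):
    """Distance between mean embeddings of reference data and incoming payload.

    A simplified drift proxy; the paper's approach is VAE-based, not this.
    """
    return float(np.linalg.norm(ref_embeddings.mean(axis=0) - new_embeddings.mean(axis=0)))

def should_retrain(ref_embeddings, new_embeddings, threshold=1.0):
    """Trigger re-training only when measured drift exceeds the threshold,
    instead of re-training on a fixed schedule."""
    return drift_score(ref_embeddings, new_embeddings) > threshold

# Synthetic 16-dimensional "text embeddings" for illustration.
rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, size=(500, 16))      # training-time data
same = rng.normal(0.0, 1.0, size=(500, 16))     # payload from the same distribution
shifted = rng.normal(2.0, 1.0, size=(500, 16))  # payload whose distribution has drifted

print(should_retrain(ref, same))     # no drift: keep the deployed model
print(should_retrain(ref, shifted))  # drift detected: re-train now
```

The design point mirrored here is the one the abstract makes: the decision to re-train is driven by a measured quantity on the live data stream, not by the calendar.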
How to Cite
Madaan, N., Manjunatha, A., Nambiar, H., Goel, A., Kumar, H., Saha, D., & Bedathur, S. (2023). DetAIL: A Tool to Automatically Detect and Analyze Drift in Language. Proceedings of the AAAI Conference on Artificial Intelligence, 37(13), 15767-15773. https://doi.org/10.1609/aaai.v37i13.26872
IAAI Technical Track on Innovative Tools for Enabling AI Application