TY - JOUR
AU - Manam, V. K. Chaithanya
AU - Thomas, Joseph Divyan
AU - Quinn, Alexander J.
PY - 2022/10/14
Y2 - 2024/03/29
TI - TaskLint: Automated Detection of Ambiguities in Task Instructions
JF - Proceedings of the AAAI Conference on Human Computation and Crowdsourcing
JA - HCOMP
VL - 10
IS - 1
SE - Full Archival Papers
DO - 10.1609/hcomp.v10i1.21996
UR - https://ojs.aaai.org/index.php/HCOMP/article/view/21996
SP - 160-172
AB - Clear instructions are a necessity for obtaining accurate results from crowd workers. Even small ambiguities can force workers to choose an interpretation arbitrarily, resulting in errors and inconsistency. Crisp instructions require significant time to design, test, and iterate. Recent approaches have engaged workers to detect and correct ambiguities. However, this process increases the time and money required to obtain accurate, consistent results. We present TaskLint, a system to automatically detect problems with task instructions. Leveraging a diverse set of existing NLP tools, TaskLint identifies words and sentences that might foretell worker confusion. This is analogous to static analysis tools for code ("linters"), which detect possible features in code that might indicate the presence of bugs. Our evaluation of TaskLint using task instructions created by novices confirms the potential for static tools to improve task clarity and the accuracy of results, while also highlighting several challenges.
ER -