TaskLint: Automated Detection of Ambiguities in Task Instructions


  • V. K. Chaithanya Manam, Purdue University
  • Joseph Divyan Thomas, Purdue University
  • Alexander J. Quinn, Purdue University




Crowdsourcing, Task Design, Instructions, Textual Ambiguity, Lint Tools, Linters, Writing Support, Static Analysis, Natural Language Processing


Clear instructions are a necessity for obtaining accurate results from crowd workers. Even small ambiguities can force workers to choose an interpretation arbitrarily, resulting in errors and inconsistency. Crisp instructions require significant time to design, test, and iterate. Recent approaches have engaged workers to detect and correct ambiguities. However, this process increases the time and money required to obtain accurate, consistent results. We present TaskLint, a system to automatically detect problems with task instructions. Leveraging a diverse set of existing NLP tools, TaskLint identifies words and sentences that might foretell worker confusion. This is analogous to static analysis tools for code ("linters"), which detect patterns in code that may indicate the presence of bugs. Our evaluation of TaskLint using task instructions created by novices confirms the potential for static tools to improve task clarity and the accuracy of results, while also highlighting several challenges.
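To make the linter analogy concrete, the sketch below shows one simple rule a lint-style checker for instructions might apply: flagging sentences that contain vague terms. This is an illustrative assumption only, not TaskLint's actual implementation; the word list and function names are hypothetical.

```python
import re

# Hypothetical vague terms that often leave interpretation to the worker.
# This list is an illustrative assumption, not drawn from TaskLint.
VAGUE_TERMS = {"appropriate", "etc", "some", "several", "relevant", "good"}

def lint_instructions(text):
    """Flag sentences containing vague terms that may confuse crowd workers.

    Returns a list of (sentence_index, matched_terms, sentence) tuples.
    """
    findings = []
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    for i, sentence in enumerate(sentences):
        # Normalize tokens by lowercasing and stripping punctuation.
        words = {w.lower().strip(".,;:!?") for w in sentence.split()}
        hits = sorted(words & VAGUE_TERMS)
        if hits:
            findings.append((i, hits, sentence))
    return findings

instructions = "Label each image. Choose an appropriate tag for some items."
for idx, hits, sent in lint_instructions(instructions):
    print(f"sentence {idx}: vague terms {hits}")
```

A real system of this kind would combine many such rules (readability metrics, missing examples, undefined referents) and surface them as warnings, much as a code linter aggregates independent checks.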




How to Cite

Manam, V. K. C., Thomas, J. D., & Quinn, A. J. (2022). TaskLint: Automated Detection of Ambiguities in Task Instructions. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 10(1), 160-172. https://doi.org/10.1609/hcomp.v10i1.21996