Leveraging Quality Prediction Models for Automatic Writing Feedback

Authors

  • Hamed Nilforoshan Columbia University
  • Eugene Wu Columbia University

DOI:

https://doi.org/10.1609/icwsm.v12i1.14998

Keywords:

writing feedback, writing quality, automated feedback, tree ensembles, product reviews, Amazon, Airbnb

Abstract

User-generated, multi-paragraph writing is pervasive and important on many social media platforms (e.g., Amazon reviews, Airbnb host profiles). Ensuring high-quality content matters, yet content submitted by users is often not of high quality. Moreover, the characteristics that constitute high quality may vary between domains in ways that users are unaware of. Automated writing feedback has the potential to point out issues and suggest improvements immediately, during the writing process. Most approaches, however, focus on syntax and phrasing, which is only one characteristic of high-quality content. Existing research has developed accurate quality-prediction models. We propose combining these models with model-explanation techniques to identify the writing features that, if changed, will most improve text quality. To this end, we develop a perturbation-based explanation method for a popular class of models called tree ensembles. Furthermore, we use a weak-supervision technique to adapt this method to generate feedback for specific text segments in addition to feedback for the entire document. Our user study finds that the perturbation-based approach, when combined with segment-specific feedback, helps improve writing quality on Amazon (review helpfulness) and Airbnb (host profile trustworthiness) by >14%, a 3X improvement over recent automated feedback techniques.
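The core idea of perturbation-based feedback can be sketched as follows. This is a minimal illustration, not the paper's implementation: the decision stumps stand in for a trained tree-ensemble quality model, and the feature names and perturbation sizes are hypothetical. Each feature of a document is perturbed in turn, and features are ranked by how much the perturbation raises the predicted quality score.

```python
# Hypothetical sketch of perturbation-based feedback over a tree ensemble.
# The hand-written stumps below stand in for a trained quality-prediction
# model; feature names and perturbation deltas are illustrative only.

def stump(feature, threshold, low, high):
    """A one-split regression tree over a feature dict."""
    return lambda x: high if x[feature] > threshold else low

# Toy "tree ensemble": predicted quality is the mean of the stumps.
ensemble = [
    stump("word_count", 100, 0.2, 0.6),
    stump("num_examples", 1, 0.1, 0.8),
    stump("readability", 60, 0.3, 0.5),
]

def predict(x):
    return sum(tree(x) for tree in ensemble) / len(ensemble)

def feedback(x, deltas):
    """Perturb each feature upward and rank features by predicted quality gain."""
    base = predict(x)
    gains = {}
    for feature, delta in deltas.items():
        perturbed = dict(x)
        perturbed[feature] += delta
        gains[feature] = predict(perturbed) - base
    return sorted(gains.items(), key=lambda kv: -kv[1])

doc = {"word_count": 80, "num_examples": 0, "readability": 65}
deltas = {"word_count": 50, "num_examples": 2, "readability": 10}
print(feedback(doc, deltas))  # highest-gain feature first
```

Here the ranking suggests which writing change (e.g., adding examples) the model predicts would most improve quality; the paper's contribution extends this idea to segment-level feedback via weak supervision.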

Published

2018-06-15

How to Cite

Nilforoshan, H., & Wu, E. (2018). Leveraging Quality Prediction Models for Automatic Writing Feedback. Proceedings of the International AAAI Conference on Web and Social Media, 12(1). https://doi.org/10.1609/icwsm.v12i1.14998