Predicting and Analyzing Language Specificity in Social Media Posts


  • Yifan Gao University of Texas at Austin
  • Yang Zhong University of Texas at Austin
  • Daniel Preoţiuc-Pietro University of Pennsylvania
  • Junyi Jessy Li University of Texas at Austin



In computational linguistics, specificity quantifies the level of detail conveyed in text. It is an important characteristic of speaker intention and language style, and is useful in NLP applications such as summarization and argumentation mining. Yet to date, expert-annotated data for sentence-level specificity are scarce and confined to the news genre. In addition, existing systems that predict sentence specificity are classifiers trained to produce binary labels (general or specific).

We collect a dataset of over 7,000 tweets annotated with specificity on a fine-grained scale. Using this dataset, we train a supervised regression model that accurately estimates specificity in social media posts, reaching a mean absolute error of 0.3578 (for ratings on a scale of 1-5) and 0.73 Pearson correlation, significantly improving over baselines and previous sentence specificity prediction systems. We also present the first large-scale study revealing the social, temporal, and mental health factors underlying language specificity on social media.
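To make the two reported evaluation metrics concrete, the sketch below computes mean absolute error and Pearson correlation for a set of predicted vs. gold specificity ratings. This is purely illustrative: the ratings are hypothetical, and this is not the authors' model or evaluation code.

```python
# Illustrative only: computing the two metrics from the abstract,
# mean absolute error (MAE) and Pearson correlation, on hypothetical
# specificity ratings (1-5 scale). Not the authors' implementation.
import math

def mean_absolute_error(gold, pred):
    """Average absolute difference between gold and predicted ratings."""
    return sum(abs(g - p) for g, p in zip(gold, pred)) / len(gold)

def pearson(gold, pred):
    """Pearson correlation coefficient between two rating lists."""
    n = len(gold)
    mg = sum(gold) / n
    mp = sum(pred) / n
    cov = sum((g - mg) * (p - mp) for g, p in zip(gold, pred))
    var_g = sum((g - mg) ** 2 for g in gold)
    var_p = sum((p - mp) ** 2 for p in pred)
    return cov / math.sqrt(var_g * var_p)

gold = [1.0, 2.5, 3.0, 4.5, 5.0]  # hypothetical annotator ratings
pred = [1.4, 2.2, 3.3, 4.1, 4.8]  # hypothetical model predictions

print(round(mean_absolute_error(gold, pred), 3))  # → 0.32
print(round(pearson(gold, pred), 3))              # → 0.981
```

A lower MAE means predictions are closer to the annotated ratings on average, while a higher Pearson correlation means the model ranks posts by specificity in a way that tracks the annotations.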




How to Cite

Gao, Y., Zhong, Y., Preoţiuc-Pietro, D., & Li, J. J. (2019). Predicting and Analyzing Language Specificity in Social Media Posts. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 6415-6422.



AAAI Technical Track: Natural Language Processing