Authorship Attribution Using a Neural Network Language Model

Authors

  • Zhenhao Ge, Purdue University
  • Yufang Sun, Purdue University
  • Mark Smith, Purdue University

DOI:

https://doi.org/10.1609/aaai.v30i1.9924

Keywords:

neural networks, language modeling, text classification

Abstract

In practice, training language models for individual authors is often expensive because of limited data resources. In such cases, Neural Network Language Models (NNLMs) generally outperform traditional non-parametric N-gram models. Here we investigate the performance of a feed-forward NNLM on an authorship attribution problem with a moderate author-set size and relatively limited data, and we also consider how text topics affect performance. Compared with a well-constructed N-gram baseline with Kneser-Ney smoothing, the proposed method achieves a nearly 2.5% reduction in perplexity and increases author classification accuracy by 3.43% on average, given as few as 5 test sentences. The performance is very competitive with the state of the art in terms of accuracy and the amount of test data required.
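To make the abstract's setup concrete, the sketch below illustrates the general scheme it describes: a Bengio-style feed-forward NNLM trained per author, with test sentences attributed to the author whose model assigns the lowest perplexity. This is not the authors' implementation; the layer sizes, context length, and function names are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of perplexity-based authorship
# attribution with a feed-forward NNLM. All hyperparameters are assumptions.
import math
import torch
import torch.nn as nn

class FeedForwardNNLM(nn.Module):
    """Bengio-style feed-forward NNLM: predict a word from n-1 context words."""
    def __init__(self, vocab_size, context_size=4, embed_dim=50, hidden_dim=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.hidden = nn.Linear(context_size * embed_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, context):                       # context: (batch, context_size)
        e = self.embed(context).flatten(1)            # concatenate context embeddings
        return self.out(torch.tanh(self.hidden(e)))  # logits over the vocabulary

def perplexity(model, contexts, targets):
    """Per-word perplexity of a token sequence under one author's model."""
    with torch.no_grad():
        loss = nn.functional.cross_entropy(model(contexts), targets)
    return math.exp(loss.item())

def attribute(author_models, contexts, targets):
    """Assign the text to the author whose NNLM yields the lowest perplexity."""
    return min(author_models,
               key=lambda a: perplexity(author_models[a], contexts, targets))
```

The design choice mirrored here is that attribution reduces to model comparison: each candidate author gets an independently trained language model, and classification requires only scoring the test sentences, which is why performance can be measured with as few as 5 sentences.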

Published

2016-03-05

How to Cite

Ge, Z., Sun, Y., & Smith, M. (2016). Authorship Attribution Using a Neural Network Language Model. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). https://doi.org/10.1609/aaai.v30i1.9924