Never-Ending Learning


  • Tom Mitchell Carnegie Mellon University
  • William Cohen Carnegie Mellon University
  • Estevam Hruschka Federal University of São Carlos
  • Partha Talukdar Indian Institute of Science
  • Justin Betteridge Carnegie Mellon University
  • Andrew Carlson Google
  • Bhavana Dalvi Mishra Carnegie Mellon University
  • Matthew Gardner Carnegie Mellon University
  • Bryan Kisiel Carnegie Mellon University
  • Jayant Krishnamurthy Carnegie Mellon University
  • Ni Lao Google
  • Kathryn Mazaitis Carnegie Mellon University
  • Thahir Mohamed
  • Ndapa Nakashole Carnegie Mellon University
  • Emmanouil Platanios Carnegie Mellon University
  • Alan Ritter Ohio State University
  • Mehdi Samadi Carnegie Mellon University
  • Burr Settles Duolingo
  • Richard Wang
  • Derry Wijaya Carnegie Mellon University
  • Abhinav Gupta Carnegie Mellon University
  • Xinlei Chen Carnegie Mellon University
  • Abulhair Saparov Carnegie Mellon University
  • Malcolm Greaves Alpine Data Lab
  • Joel Welling Pittsburgh Supercomputing Center



Keywords: never-ending learning, machine learning, read the web


Abstract

Whereas people learn many different types of knowledge from diverse experiences over many years, most current machine learning systems acquire just a single function or data model from just a single data set. We propose a never-ending learning paradigm for machine learning, to better reflect the more ambitious and encompassing type of learning performed by humans. As a case study, we describe the Never-Ending Language Learner (NELL), which achieves some of the desired properties of a never-ending learner, and we discuss lessons learned. NELL has been learning to read the web 24 hours/day since January 2010, and so far has acquired a knowledge base with over 80 million confidence-weighted beliefs (e.g., servedWith(tea, biscuits)). NELL has also learned millions of features and parameters that enable it to read these beliefs from the web. Additionally, it has learned to reason over these beliefs to infer new beliefs, and is able to extend its ontology by synthesizing new relational predicates. NELL can be tracked online and followed on Twitter at @CMUNELL.
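To make the idea of "confidence-weighted beliefs" concrete, here is a minimal, hypothetical sketch of a knowledge base of (predicate, subject, object) triples with confidences, plus a toy inference rule that derives new beliefs from existing ones. The class and method names are illustrative assumptions for exposition; this is not NELL's actual implementation.

```python
# Illustrative sketch only: a tiny confidence-weighted belief store in the
# spirit of NELL's beliefs such as servedWith(tea, biscuits).

class BeliefKB:
    def __init__(self):
        # Maps (predicate, subject, object) -> confidence in [0, 1].
        self.beliefs = {}

    def add(self, predicate, subj, obj, confidence):
        # Keep the highest confidence observed for a repeated belief.
        key = (predicate, subj, obj)
        self.beliefs[key] = max(confidence, self.beliefs.get(key, 0.0))

    def query(self, predicate, min_conf=0.5):
        # Return (subject, object, confidence) triples above a threshold.
        return [(s, o, c) for (p, s, o), c in self.beliefs.items()
                if p == predicate and c >= min_conf]

    def infer_symmetric(self, predicate, discount=0.9):
        # Toy inference rule: if a predicate is symmetric, assert the
        # reversed belief at a discounted confidence.
        for (s, o, c) in self.query(predicate, min_conf=0.0):
            self.add(predicate, o, s, c * discount)


kb = BeliefKB()
kb.add("servedWith", "tea", "biscuits", 0.95)
kb.infer_symmetric("servedWith")
print(kb.query("servedWith"))
```

The discounting in `infer_symmetric` reflects the general point that inferred beliefs should carry less confidence than the beliefs they were derived from.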




How to Cite

Mitchell, T., Cohen, W., Hruschka, E., Talukdar, P., Betteridge, J., Carlson, A., Dalvi Mishra, B., Gardner, M., Kisiel, B., Krishnamurthy, J., Lao, N., Mazaitis, K., Mohamed, T., Nakashole, N., Platanios, E., Ritter, A., Samadi, M., Settles, B., Wang, R., Wijaya, D., Gupta, A., Chen, X., Saparov, A., Greaves, M., & Welling, J. (2015). Never-Ending Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 29(1).