Across-Model Collective Ensemble Classification

Authors

  • Hoda Eldardiry, Purdue University
  • Jennifer Neville, Purdue University

DOI:

https://doi.org/10.1609/aaai.v25i1.7934

Abstract

Ensemble classification methods that independently construct component models (e.g., bagging) improve accuracy over single models by reducing the error due to variance. Some work has extended ensemble techniques to classification in relational domains by taking relational data characteristics or multiple link types into account during model construction. However, because these approaches follow the conventional approach to ensemble learning, they improve performance only by reducing the error due to variance in learning. We note, however, that variance in inference can be an additional source of error in relational methods that use collective classification, since inferred values are propagated during inference. We propose a novel ensemble mechanism for collective classification that reduces both learning and inference variance by incorporating prediction averaging into the collective inference process itself. We show that our proposed method significantly outperforms a straightforward relational ensemble baseline on both synthetic and real-world datasets.
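
To illustrate the core idea of averaging predictions across component models inside the collective inference loop (rather than only after inference finishes), the following is a minimal sketch. It is not the authors' implementation: the graph, the relational-neighbor update rule, and the per-model weights are all assumed placeholders for the learned local models described in the paper.

```python
# A minimal sketch (assumed details, not the paper's algorithm): several
# component models run collective inference over the same graph, and at every
# iteration their node predictions are averaged ACROSS models before being
# propagated to neighbors, so inference variance is reduced during inference
# itself rather than only by averaging final outputs.

def across_model_collective_inference(graph, labeled, model_weights, n_iters=10):
    """graph: dict node -> list of neighbor nodes
    labeled: dict node -> known class-1 probability (0.0 or 1.0)
    model_weights: one smoothing weight per component model (stand-ins for
    distinct learned local models, e.g. trained on bootstrap samples)
    Returns averaged class-1 probabilities for the unlabeled nodes."""
    nodes = list(graph)
    # shared (across-model averaged) predictions, initialized to a 0.5 prior
    shared = {v: labeled.get(v, 0.5) for v in nodes}
    per_model = [dict(shared) for _ in model_weights]

    for _ in range(n_iters):
        # each component model does one collective-inference sweep with a
        # simple relational-neighbor rule (placeholder for the real local
        # model), reading the across-model averaged neighbor predictions
        for preds, w in zip(per_model, model_weights):
            for v in nodes:
                if v in labeled or not graph[v]:
                    continue
                nbr_avg = sum(shared[u] for u in graph[v]) / len(graph[v])
                preds[v] = w * nbr_avg + (1 - w) * 0.5
        # across-model averaging step: pool the component predictions so the
        # next sweep propagates the lower-variance estimates
        for v in nodes:
            if v not in labeled:
                shared[v] = sum(p[v] for p in per_model) / len(per_model)

    return {v: p for v, p in shared.items() if v not in labeled}


if __name__ == "__main__":
    # toy homophilous graph with two labeled endpoints
    toy_graph = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
    known = {1: 1.0, 4: 0.0}
    print(across_model_collective_inference(toy_graph, known, [0.7, 0.8, 0.9]))
```

In a conventional relational ensemble, each component would finish its own collective inference independently and the outputs would be averaged only at the end; the interleaved averaging above is what distinguishes the across-model scheme described in the abstract.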

Published

2011-08-04

How to Cite

Eldardiry, H., & Neville, J. (2011). Across-Model Collective Ensemble Classification. Proceedings of the AAAI Conference on Artificial Intelligence, 25(1), 343-349. https://doi.org/10.1609/aaai.v25i1.7934

Section

AAAI Technical Track: Machine Learning