Leveraging Common Structure to Improve Prediction across Related Datasets

Authors

  • Matt Barnes, Carnegie Mellon University
  • Nick Gisolfi, Carnegie Mellon University
  • Madalina Fiterau, Carnegie Mellon University
  • Artur Dubrawski, Carnegie Mellon University

DOI:

https://doi.org/10.1609/aaai.v29i1.9746

Keywords:

Outlier detection, density estimation

Abstract

In many applications, training data comes in the form of related datasets obtained from several sources, and the source typically affects the sample distribution. Classification models learned from such data, which are expected to perform well on similar data from new sources, often suffer from bias introduced by what we call 'spurious' samples: samples that arise from source-specific characteristics and are not representative of any other part of the data. Since standard outlier detection and robust classification usually fall short of identifying groups of spurious samples, we propose a procedure that identifies the common structure across datasets by minimizing a multi-dataset divergence metric, improving accuracy on new datasets.
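
The abstract leaves the divergence metric and the removal procedure unspecified. The following is a minimal illustrative sketch, not the authors' algorithm: it assumes an RBF-kernel MMD² estimate as the pairwise divergence, k-means to propose candidate groups of samples within each dataset, and a greedy rule that drops a group only when its removal lowers the total pairwise divergence across datasets. All function names and parameters here are hypothetical choices for illustration.

```python
# Hypothetical sketch of pruning "spurious" sample groups by minimizing a
# multi-dataset divergence. The divergence (MMD^2), the grouping (k-means),
# and the greedy removal rule are assumptions, not the paper's method.
import numpy as np
from sklearn.cluster import KMeans


def mmd2(X, Y, gamma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy with an RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()


def total_divergence(datasets, gamma=1.0):
    """Sum of pairwise MMD^2 over all dataset pairs."""
    total = 0.0
    for i in range(len(datasets)):
        for j in range(i + 1, len(datasets)):
            total += mmd2(datasets[i], datasets[j], gamma)
    return total


def prune_spurious(datasets, n_clusters=5, gamma=1.0):
    """For each dataset, drop the k-means cluster whose removal most reduces
    the multi-dataset divergence, keeping the dataset intact if no removal helps."""
    pruned = [X.copy() for X in datasets]
    for idx, X in enumerate(pruned):
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
        best, best_div = X, total_divergence(pruned, gamma)
        for c in range(n_clusters):
            candidate = X[labels != c]          # dataset with cluster c removed
            trial = pruned[:idx] + [candidate] + pruned[idx + 1:]
            div = total_divergence(trial, gamma)
            if div < best_div:
                best, best_div = candidate, div
        pruned[idx] = best
    return pruned


if __name__ == "__main__":
    # Hypothetical usage: three related datasets, one with a source-specific cluster.
    rng = np.random.default_rng(0)
    datasets = [rng.normal(size=(200, 4)) for _ in range(3)]
    datasets[0][:30] += 5.0                     # inject a spurious group
    cleaned = prune_spurious(datasets)
    print([len(X) for X in cleaned])
```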

Published

2015-03-04

How to Cite

Barnes, M., Gisolfi, N., Fiterau, M., & Dubrawski, A. (2015). Leveraging Common Structure to Improve Prediction across Related Datasets. Proceedings of the AAAI Conference on Artificial Intelligence, 29(1). https://doi.org/10.1609/aaai.v29i1.9746