Partial Wasserstein Covering

Authors

  • Keisuke Kawano Toyota Central R&D Labs., Inc.
  • Satoshi Koide Toyota Central R&D Labs., Inc.
  • Keisuke Otaki Toyota Central R&D Labs., Inc.

DOI:

https://doi.org/10.1609/aaai.v36i7.20671

Keywords:

Machine Learning (ML)

Abstract

We consider a general task called partial Wasserstein covering, whose goal is to reveal which patterns are not taken into account in one dataset (e.g., a dataset used during development) compared with another (e.g., a dataset obtained from actual applications). We model this task as a discrete optimization problem with the partial Wasserstein divergence as the objective function. Although this problem is NP-hard, we prove that it satisfies the submodular property, allowing us to use a greedy algorithm with a (1 − 1/e) ≈ 0.63 approximation guarantee. However, the greedy algorithm is still inefficient because it requires solving a linear program for each evaluation of the objective function. To overcome this inefficiency, we propose quasi-greedy algorithms, which consist of a series of acceleration techniques, such as sensitivity analysis based on strong duality and the so-called C-transform from the optimal transport literature. Experimentally, we demonstrate that we can efficiently fill the gaps between the two datasets and identify missing scenes in real driving-scene datasets.
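The algorithmic core described above is greedy maximization of a monotone submodular function, which yields the stated (1 − 1/e) ≈ 0.63 guarantee. The following minimal sketch shows that greedy skeleton; the coverage objective used here is a hypothetical stand-in for the paper's partial-Wasserstein-based gain, and all names (`greedy_select`, `universe_sets`, `coverage`) are illustrative, not from the paper.

```python
# Greedy maximization of a monotone submodular set function: the generic
# skeleton behind the paper's (1 - 1/e) approximation guarantee.
# The objective below is a toy set-coverage surrogate (hypothetical),
# standing in for the paper's partial Wasserstein divergence reduction.

def greedy_select(candidates, budget, gain):
    """Pick `budget` items, each round adding the one with the largest marginal gain."""
    selected = []
    for _ in range(budget):
        best = max((c for c in candidates if c not in selected),
                   key=lambda c: gain(selected + [c]) - gain(selected))
        selected.append(best)
    return selected

# Toy objective: number of distinct elements covered (monotone and submodular).
universe_sets = {
    "a": {1, 2, 3},
    "b": {3, 4},
    "c": {4, 5, 6},
    "d": {1, 6},
}

def coverage(chosen):
    covered = set()
    for name in chosen:
        covered |= universe_sets[name]
    return len(covered)

picked = greedy_select(list(universe_sets), budget=2, gain=coverage)
print(picked)  # → ['a', 'c']: "a" covers 3 new elements, then "c" adds 3 more
```

Each greedy round requires evaluating the marginal gain of every remaining candidate; in the paper's setting that evaluation is a linear program, which is exactly the cost the proposed quasi-greedy accelerations (sensitivity analysis, C-transform) are designed to reduce.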

Published

2022-06-28

How to Cite

Kawano, K., Koide, S., & Otaki, K. (2022). Partial Wasserstein Covering. Proceedings of the AAAI Conference on Artificial Intelligence, 36(7), 7115-7123. https://doi.org/10.1609/aaai.v36i7.20671

Section

AAAI Technical Track on Machine Learning II