Tap and Shoot Segmentation

Authors

  • Ding-Jie Chen, National Tsing Hua University
  • Jui-Ting Chien, National Tsing Hua University
  • Hwann-Tzong Chen, National Tsing Hua University
  • Long-Wen Chang, National Tsing Hua University

DOI:

https://doi.org/10.1609/aaai.v32i1.11906

Keywords:

Interactive image segmentation

Abstract

We present a new segmentation method that leverages latent photographic information available at the moment of taking pictures. Photography on a portable device is often done by tapping to focus before shooting the picture. This tap-and-shoot interaction for photography not only specifies the region of interest but also yields useful focus/defocus cues for image segmentation. However, most of the previous interactive segmentation methods address the problem of image segmentation in a post-processing scenario without considering the action of taking pictures. We propose a learning-based approach to this new tap-and-shoot scenario of interactive segmentation. The experimental results on various datasets show that, by training a deep convolutional network to integrate the selection and focus/defocus cues, our method can achieve higher segmentation accuracy in comparison with existing interactive segmentation methods.
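The abstract describes integrating the tap-selection cue and the focus/defocus cue as inputs to a deep convolutional network. One simple way to picture this fusion is to stack the extra cues as additional channels on top of the RGB image. The sketch below illustrates that idea only; the Gaussian tap encoding and the 5-channel layout are assumptions for illustration, not the paper's exact design.

```python
import numpy as np

def build_network_input(rgb, tap_xy, defocus, sigma=0.1):
    """Stack RGB, a tap-position map, and a defocus map into one
    5-channel tensor (illustrative encoding, not the authors' exact one).

    rgb:     (H, W, 3) float array in [0, 1]
    tap_xy:  (row, col) of the tapped pixel
    defocus: (H, W) per-pixel defocus estimate in [0, 1]
    """
    h, w, _ = rgb.shape
    ys, xs = np.mgrid[0:h, 0:w]
    ty, tx = tap_xy
    # Encode the selection cue as a Gaussian bump centered on the tap,
    # with coordinates normalized so sigma is resolution-independent.
    d2 = ((ys - ty) / h) ** 2 + ((xs - tx) / w) ** 2
    tap_map = np.exp(-d2 / (2.0 * sigma ** 2))
    return np.dstack([rgb, tap_map, defocus]).astype(np.float32)

# Toy usage: a random image, a tap at the center, a random defocus map.
rgb = np.random.rand(64, 64, 3)
defocus = np.random.rand(64, 64)
x = build_network_input(rgb, (32, 32), defocus)
```

A network would then consume this 5-channel tensor in place of a plain RGB input, letting it learn how the tap location and the focus/defocus pattern jointly indicate the region of interest.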

Published

2018-04-26

How to Cite

Chen, D.-J., Chien, J.-T., Chen, H.-T., & Chang, L.-W. (2018). Tap and Shoot Segmentation. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11906

Section

Main Track: Machine Learning Applications