Generative Adversarial Networks for Video-to-Video Domain Adaptation

Authors

  • Jiawei Chen, YouTu Lab
  • Yuexiang Li, YouTu Lab
  • Kai Ma, YouTu Lab
  • Yefeng Zheng, YouTu Lab

DOI:

https://doi.org/10.1609/aaai.v34i04.5750

Abstract

Endoscopic videos acquired at different centres often have different imaging conditions, e.g., color and illumination, which cause models trained on one domain to generalize poorly to another. Domain adaptation is a potential solution to this problem; however, few existing works have focused on the translation of video-based data. In this work, we propose a novel generative adversarial network (GAN), namely VideoGAN, to transfer video-based data across different domains. Since the frames of a video share similar content and imaging conditions, the proposed VideoGAN adopts an X-shape generator to preserve intra-video consistency during translation. Furthermore, a loss function, namely the color histogram loss, is proposed to tune the color distribution of each translated frame. Two colonoscopic datasets from different centres, i.e., CVC-Clinic and ETIS-Larib, are adopted to evaluate the domain-adaptation performance of our VideoGAN. Experimental results demonstrate that the adapted colonoscopic videos generated by our VideoGAN significantly boost the accuracy of colorectal-polyp segmentation across the multicentre datasets, i.e., an improvement of 5%. As VideoGAN is a general network architecture, we also evaluate it on a cloudy-to-sunny translation task with the CamVid driving-video dataset. Comprehensive experiments show that the domain gap can be substantially narrowed by our VideoGAN.
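
The abstract describes the color histogram loss only at a high level. Below is a minimal PyTorch sketch of one plausible formulation: a Gaussian-kernel soft (differentiable) histogram is computed per colour channel, and the L1 distance between the histograms of a translated frame and a target-domain reference frame serves as the loss. The function names (soft_histogram, color_histogram_loss) and the binning parameters are illustrative assumptions, not the paper's actual implementation.

    import torch

    def soft_histogram(img, bins=64, sigma=0.02):
        # img: (C, H, W) tensor with intensities scaled to [0, 1].
        # A Gaussian kernel around each bin centre keeps the histogram
        # differentiable, so it can back-propagate into the generator.
        centers = torch.linspace(0.0, 1.0, bins, device=img.device)
        x = img.flatten(1).unsqueeze(-1)                    # (C, H*W, 1)
        w = torch.exp(-0.5 * ((x - centers) / sigma) ** 2)  # (C, H*W, bins)
        hist = w.sum(dim=1)                                 # (C, bins)
        return hist / hist.sum(dim=1, keepdim=True)         # per-channel normalisation

    def color_histogram_loss(translated, reference):
        # L1 distance between the per-channel colour histograms of a
        # translated frame and a target-domain reference frame.
        return (soft_histogram(translated) - soft_histogram(reference)).abs().mean()

In training, such a term would presumably be weighted and added to the usual adversarial (and, if applicable, cycle-consistency) objectives to pull each translated frame's color distribution toward the target domain.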

Published

2020-04-03

How to Cite

Chen, J., Li, Y., Ma, K., & Zheng, Y. (2020). Generative Adversarial Networks for Video-to-Video Domain Adaptation. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 3462-3469. https://doi.org/10.1609/aaai.v34i04.5750

Issue

Vol. 34 No. 04 (2020)

Section

AAAI Technical Track: Machine Learning