Texture Reformer: Towards Fast and Universal Interactive Texture Transfer

Authors

  • Zhizhong Wang, Zhejiang University
  • Lei Zhao, Zhejiang University
  • Haibo Chen, Zhejiang University
  • Ailin Li, Zhejiang University
  • Zhiwen Zuo, Zhejiang University
  • Wei Xing, Zhejiang University
  • Dongming Lu, Zhejiang University

DOI:

https://doi.org/10.1609/aaai.v36i3.20164

Keywords:

Computer Vision (CV)

Abstract

In this paper, we present the texture reformer, a fast and universal neural-based framework for interactive texture transfer with user-specified guidance. The challenges lie in three aspects: 1) the diversity of tasks, 2) the simplicity of guidance maps, and 3) the execution efficiency. To address these challenges, our key idea is to use a novel feed-forward multi-view and multi-stage synthesis procedure consisting of I) a global view structure alignment stage, II) a local view texture refinement stage, and III) a holistic effect enhancement stage, which synthesizes high-quality results with coherent structures and fine texture details in a coarse-to-fine fashion. In addition, we introduce a novel learning-free view-specific texture reformation (VSTR) operation with a new semantic map guidance strategy to achieve more accurate semantic-guided and structure-preserved texture transfer. Experimental results on a variety of application scenarios demonstrate the effectiveness and superiority of our framework. Compared with state-of-the-art interactive texture transfer algorithms, it not only achieves higher-quality results but, more remarkably, is also 2 to 5 orders of magnitude faster.
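To make the stage ordering described above concrete, the following is a minimal, hypothetical sketch of a feed-forward, coarse-to-fine three-stage flow. It is not the authors' implementation: the function names, the label-matching alignment, the statistics-blending refinement, and the normalization-based enhancement are all stand-in assumptions used only to illustrate how a global alignment, a local refinement, and a holistic enhancement could be composed.

```python
import numpy as np

def global_structure_alignment(target_sem, source_sem, source_tex):
    """Stage I (global view): roughly align the source texture to the target
    semantic layout. Placeholder: fill each target region with the mean
    source color of the matching semantic label."""
    coarse = np.zeros_like(source_tex, dtype=np.float64)
    for label in np.unique(target_sem):
        target_mask = target_sem == label
        source_mask = source_sem == label
        if source_mask.any():
            coarse[target_mask] = source_tex[source_mask].mean(axis=0)
    return coarse

def local_texture_refinement(coarse, source_tex):
    """Stage II (local view): refine texture details. Placeholder: blend the
    coarse result toward the source texture's global statistics."""
    return 0.5 * coarse + 0.5 * (coarse - coarse.mean() + source_tex.mean())

def holistic_effect_enhancement(refined):
    """Stage III: holistic effect enhancement. Placeholder: rescale to [0, 1]."""
    lo, hi = refined.min(), refined.max()
    return (refined - lo) / (hi - lo + 1e-8)

def interactive_texture_transfer(target_sem, source_sem, source_tex):
    """Coarse-to-fine composition of the three stages named in the abstract."""
    coarse = global_structure_alignment(target_sem, source_sem, source_tex)
    refined = local_texture_refinement(coarse, source_tex)
    return holistic_effect_enhancement(refined)

if __name__ == "__main__":
    # Toy 8x8 example: two semantic labels, random source texture in [0, 1].
    rng = np.random.default_rng(0)
    source_tex = rng.random((8, 8, 3))
    source_sem = np.zeros((8, 8), dtype=int); source_sem[:, 4:] = 1
    target_sem = np.zeros((8, 8), dtype=int); target_sem[4:, :] = 1
    result = interactive_texture_transfer(target_sem, source_sem, source_tex)
    print(result.shape)  # (8, 8, 3)
```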

Published

2022-06-28

How to Cite

Wang, Z., Zhao, L., Chen, H., Li, A., Zuo, Z., Xing, W., & Lu, D. (2022). Texture Reformer: Towards Fast and Universal Interactive Texture Transfer. Proceedings of the AAAI Conference on Artificial Intelligence, 36(3), 2624-2632. https://doi.org/10.1609/aaai.v36i3.20164

Section

AAAI Technical Track on Computer Vision III