SSAT: A Symmetric Semantic-Aware Transformer Network for Makeup Transfer and Removal
Keywords: Computer Vision (CV)
Abstract
Makeup transfer requires not only extracting the makeup style of the reference image but also rendering that style at the semantically corresponding positions of the target image. However, most existing methods focus on the former and ignore the latter, and thus fail to achieve the desired results. To solve this problem, we propose a unified Symmetric Semantic-Aware Transformer (SSAT) network, which incorporates semantic correspondence learning to realize makeup transfer and removal simultaneously. In SSAT, a novel Symmetric Semantic Corresponding Feature Transfer (SSCFT) module and a weakly supervised semantic loss are proposed to model and facilitate the establishment of accurate semantic correspondence. During generation, the extracted makeup features are spatially warped by SSCFT to achieve semantic alignment with the target image; the warped makeup features are then combined with the unmodified makeup-irrelevant features to produce the final result. Experiments show that our method obtains more visually accurate makeup transfer results, and a user study comparing it with other state-of-the-art makeup transfer methods confirms its superiority. In addition, we verify the robustness of the proposed method under differences in expression and pose and under object occlusion, and extend it to video makeup transfer.
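The abstract describes SSCFT as warping the reference's makeup features into semantic alignment with the target before fusing them with makeup-irrelevant features. A common way to realize such semantic-correspondence warping is cross-attention over semantic features; the sketch below illustrates that general idea only. It is not the paper's actual SSCFT implementation, and all function and variable names here are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def semantic_feature_transfer(sem_target, sem_ref, makeup_ref):
    """Illustrative attention-based feature warping (not the paper's code).

    sem_target: (N, C) semantic features at N target positions
    sem_ref:    (M, C) semantic features at M reference positions
    makeup_ref: (M, D) makeup features at the reference positions
    returns:    (N, D) makeup features re-aligned to the target layout
    """
    # Correspondence matrix: each target position attends to
    # semantically similar reference positions.
    scale = 1.0 / np.sqrt(sem_target.shape[1])
    attn = softmax(sem_target @ sem_ref.T * scale, axis=1)  # (N, M)
    # Warp: gather reference makeup features weighted by correspondence.
    return attn @ makeup_ref

# Toy example: 3 positions whose semantics are (nearly) one-hot,
# with the reference positions permuted relative to the target.
sem_t = np.eye(3) * 10.0                 # target semantics
sem_r = np.eye(3)[[2, 0, 1]] * 10.0      # same semantics, shuffled order
mk_r = np.array([[2.0], [0.0], [1.0]])   # makeup values at reference positions
aligned = semantic_feature_transfer(sem_t, sem_r, mk_r)
print(aligned.ravel())  # makeup values reordered to match the target
```

In this toy case the attention matrix is close to a permutation matrix, so the warp simply reorders the reference makeup values to the target's spatial layout; with soft semantic features it instead blends nearby correspondences.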
How to Cite
Sun, Z., Chen, Y., & Xiong, S. (2022). SSAT: A Symmetric Semantic-Aware Transformer Network for Makeup Transfer and Removal. Proceedings of the AAAI Conference on Artificial Intelligence, 36(2), 2325-2334. https://doi.org/10.1609/aaai.v36i2.20131
AAAI Technical Track on Computer Vision II