Cyclically Disentangled Feature Translation for Face Anti-spoofing
DOI: https://doi.org/10.1609/aaai.v37i3.25443
Keywords: CV: Biometrics, Face, Gesture & Pose
Abstract
Current domain adaptation methods for face anti-spoofing leverage labeled source domain data and unlabeled target domain data to obtain a promising generalizable decision boundary. However, it is usually difficult for these methods to achieve a perfect domain-invariant disentanglement of liveness features, which may degrade the final classification performance due to domain differences in illumination, face category, spoof type, etc. In this work, we tackle cross-scenario face anti-spoofing by proposing a novel domain adaptation method called the cyclically disentangled feature translation network (CDFTN). Specifically, CDFTN generates pseudo-labeled samples that possess: 1) source domain-invariant liveness features and 2) target domain-specific content features, which are disentangled through domain adversarial training. A robust classifier is trained on the synthetic pseudo-labeled images under the supervision of source domain labels. We further extend CDFTN to multi-target domain adaptation by leveraging data from additional unlabeled target domains. Extensive experiments on several public datasets demonstrate that our proposed approach significantly outperforms the state of the art. Code and models are available at https://github.com/vis-face/CDFTN.
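The core idea of the abstract — combining a source sample's liveness feature with a target sample's content feature to form a pseudo-labeled sample — can be illustrated with a toy sketch. This is not the paper's actual network: the linear `encode`/`decode` maps, dimensions, and variable names below are all hypothetical stand-ins for the learned encoders and generator trained adversarially in CDFTN.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w_liv, w_cnt):
    # Split a feature vector into a liveness part and a content part
    # via two hypothetical linear projections (learned networks in practice).
    return w_liv @ x, w_cnt @ x

def decode(liveness, content, w_dec):
    # Recombine the disentangled parts into a translated sample.
    return w_dec @ np.concatenate([liveness, content])

d = 8                                  # toy feature dimensionality
w_liv = rng.normal(size=(4, d))        # liveness-feature projection
w_cnt = rng.normal(size=(4, d))        # content-feature projection
w_dec = rng.normal(size=(d, 8))        # decoder / generator

x_src = rng.normal(size=d)             # labeled source-domain sample
x_tgt = rng.normal(size=d)             # unlabeled target-domain sample

liv_src, _ = encode(x_src, w_liv, w_cnt)   # source liveness feature
_, cnt_tgt = encode(x_tgt, w_liv, w_cnt)   # target content feature

# Pseudo-labeled sample: source liveness + target content.
# It inherits the source label, so a classifier can be trained on it.
x_pseudo = decode(liv_src, cnt_tgt, w_dec)
print(x_pseudo.shape)  # (8,)
```

In the actual method, the disentanglement is enforced through cyclic translation and domain adversarial training rather than fixed projections; the sketch only shows the feature-swapping step that produces the pseudo-labeled images.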
Published
2023-06-26
How to Cite
Yue, H., Wang, K., Zhang, G., Feng, H., Han, J., Ding, E., & Wang, J. (2023). Cyclically Disentangled Feature Translation for Face Anti-spoofing. Proceedings of the AAAI Conference on Artificial Intelligence, 37(3), 3358-3366. https://doi.org/10.1609/aaai.v37i3.25443
Section
AAAI Technical Track on Computer Vision III