Unsupervised Coherent Video Cartoonization with Perceptual Motion Consistency
Keywords: Computer Vision (CV)
Abstract
In recent years, creative content generation tasks such as style transfer and neural photo editing have attracted increasing attention. Among these, cartoonization of real-world scenes has promising applications in entertainment and industry. Unlike image translation, which focuses on improving the style effect of generated images, video cartoonization imposes an additional requirement of temporal consistency. In this paper, we propose a spatially-adaptive semantic alignment framework with perceptual motion consistency for coherent video cartoonization in an unsupervised manner. The semantic alignment module is designed to restore deformations of semantic structure caused by spatial information loss in the encoder-decoder architecture. Furthermore, we introduce the spatio-temporal correlative map as a style-independent, global-aware regularization on perceptual motion consistency. Derived from similarity measurements of high-level features in photo and cartoon frames, it captures global semantic information beyond the raw pixel values of optical flow. Moreover, the similarity measurement disentangles temporal relationships from domain-specific style properties, which helps regularize temporal consistency without hurting the style effects of cartoon images. Qualitative and quantitative experiments demonstrate that our method generates highly stylistic and temporally consistent cartoon videos.
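The abstract's core regularizer compares how feature similarities evolve between consecutive frames in the photo and cartoon domains. The following is a minimal NumPy sketch of that idea, not the paper's implementation: the feature tensors, function names, and the L1 penalty are illustrative assumptions, and the actual method operates on high-level encoder features of video frames.

```python
import numpy as np

def correlative_map(feat_t, feat_t1):
    """Illustrative spatio-temporal correlative map: cosine similarity
    between every spatial location of frame t and frame t+1.
    feat_*: arrays of shape (C, H*W), channels x flattened spatial grid."""
    a = feat_t / (np.linalg.norm(feat_t, axis=0, keepdims=True) + 1e-8)
    b = feat_t1 / (np.linalg.norm(feat_t1, axis=0, keepdims=True) + 1e-8)
    return a.T @ b  # (H*W, H*W) similarity map

def motion_consistency_loss(photo_t, photo_t1, cartoon_t, cartoon_t1):
    """Hypothetical consistency term: penalize the difference between the
    photo-domain and cartoon-domain correlative maps, so temporal structure
    is preserved independently of the cartoon style."""
    m_photo = correlative_map(photo_t, photo_t1)
    m_cartoon = correlative_map(cartoon_t, cartoon_t1)
    return np.abs(m_photo - m_cartoon).mean()
```

Because the loss compares similarity structure rather than raw pixel values, identical motion in the two domains yields zero penalty even when their appearances differ, which is the disentanglement the abstract describes.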
How to Cite
Liu, Z., Li, L., Jiang, H., Jin, X., Tu, D., Wang, S., & Zha, Z.-J. (2022). Unsupervised Coherent Video Cartoonization with Perceptual Motion Consistency. Proceedings of the AAAI Conference on Artificial Intelligence, 36(2), 1846-1853. https://doi.org/10.1609/aaai.v36i2.20078
AAAI Technical Track on Computer Vision II