RTGaze: Real-Time 3D-Aware Gaze Redirection from a Single Image

Authors

  • Hengfei Wang, University of Birmingham
  • Zhongqun Zhang, University of Birmingham; Nankai University
  • Yihua Cheng, University of Birmingham
  • Hyung Jin Chang, University of Birmingham

DOI:

https://doi.org/10.1609/aaai.v40i12.37942

Abstract

Gaze redirection methods aim to generate realistic human face images with controllable eye movement. However, recent methods often struggle with 3D consistency, efficiency, or quality, limiting their practical applications. In this work, we propose RTGaze, a real-time and high-quality gaze redirection method. Our approach learns a gaze-controllable facial representation from face images and gaze prompts, then decodes this representation via neural rendering for gaze redirection. Additionally, we distill face geometric priors from a pretrained 3D portrait generator to enhance generation quality. We evaluate RTGaze both qualitatively and quantitatively, demonstrating state-of-the-art performance in efficiency, redirection accuracy, and image quality across multiple datasets. Our system achieves real-time, 3D-aware gaze redirection with a feedforward network (~0.06 sec/image), making it 800× faster than previous state-of-the-art 3D-aware methods.
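The abstract describes a three-stage feedforward pipeline: encode a face image into a gaze-controllable representation, condition it on a gaze prompt, and decode via neural rendering. The sketch below illustrates that data flow only; all function names, shapes, and the random "weights" are hypothetical stand-ins, not the authors' architecture or API.

```python
# Illustrative sketch of an encode -> condition-on-gaze -> render pipeline,
# loosely following the stages named in the abstract. Everything here
# (names, dimensions, random weights) is an assumption for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in weights for a toy feedforward encoder, gaze conditioner, decoder.
W_enc = rng.standard_normal((64 * 64 * 3, 128)) * 0.01
W_gaze = rng.standard_normal((2, 128)) * 0.01
W_dec = rng.standard_normal((128, 64 * 64 * 3)) * 0.01

def encode_face(image: np.ndarray) -> np.ndarray:
    """Map a flattened face image to a latent facial representation."""
    return np.tanh(image.reshape(-1) @ W_enc)

def apply_gaze_prompt(latent: np.ndarray, gaze: np.ndarray) -> np.ndarray:
    """Condition the latent on a 2D gaze prompt (pitch, yaw)."""
    return latent + gaze @ W_gaze

def render(latent: np.ndarray) -> np.ndarray:
    """Decode the latent back to an image (neural-rendering placeholder)."""
    return np.clip(latent @ W_dec, 0.0, 1.0).reshape(64, 64, 3)

face = rng.random((64, 64, 3))   # input face image in [0, 1]
gaze = np.array([0.1, -0.2])     # target gaze direction (radians)
redirected = render(apply_gaze_prompt(encode_face(face), gaze))
print(redirected.shape)          # (64, 64, 3)
```

Because every stage is a single feedforward pass with no per-image optimization, this structure is consistent with the real-time claim (~0.06 sec/image) in the abstract.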

Published

2026-03-14

How to Cite

Wang, H., Zhang, Z., Cheng, Y., & Chang, H. J. (2026). RTGaze: Real-Time 3D-Aware Gaze Redirection from a Single Image. Proceedings of the AAAI Conference on Artificial Intelligence, 40(12), 9784–9792. https://doi.org/10.1609/aaai.v40i12.37942

Section

AAAI Technical Track on Computer Vision IX