Label-Efficient Data Augmentation with Video Diffusion Models for Guidewire Segmentation in Cardiac Fluoroscopy

Authors

  • Shaoyan Pan, Emory University
  • Yikang Liu, United Imaging Intelligence
  • Lin Zhao, United Imaging Intelligence
  • Eric Z. Chen, United Imaging Intelligence
  • Xiao Chen, United Imaging Intelligence
  • Terrence Chen, United Imaging Intelligence
  • Shanhui Sun, United Imaging Intelligence

DOI:

https://doi.org/10.1609/aaai.v39i6.32675

Abstract

Accurate segmentation of guidewires in interventional cardiac fluoroscopy videos is crucial for computer-aided navigation tasks. Although deep learning methods have demonstrated high accuracy and robustness in wire segmentation, they require substantial annotated data to generalize well, and such labels are costly to obtain. To address this challenge, we propose the Segmentation-guided Frame-consistency Video Diffusion Model (SF-VD), which generates large collections of labeled fluoroscopy videos to augment the training data for wire segmentation networks. SF-VD leverages videos with limited annotations by modeling the scene distribution and the motion distribution independently. It first samples the scene distribution by generating a 2D fluoroscopy image with the wire positioned according to a specified input mask, and then samples the motion distribution by progressively generating subsequent frames, enforcing frame-to-frame coherence through a frame-consistency strategy. A segmentation-guided mechanism further refines the process by adjusting wire contrast, ensuring a diverse range of wire visibility in the synthesized images. Evaluation on a fluoroscopy dataset confirms the superior quality of the generated videos and shows significant improvements in guidewire segmentation.
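The two-stage sampling described in the abstract can be sketched in miniature. This is a hypothetical illustration, not the authors' implementation: `sample_scene`, `sample_motion`, and the toy denoiser below are invented names, and a real SF-VD would use learned diffusion denoisers over image tensors rather than flat lists.

```python
def sample_scene(wire_mask, denoise_step, steps=4):
    """Stage 1: sample the scene distribution -- generate the first
    fluoroscopy frame with the wire placed per the input mask.
    `denoise_step` stands in for one conditional denoising pass."""
    frame = [0.0] * len(wire_mask)  # placeholder for the initial noise
    for _ in range(steps):
        frame = denoise_step(frame, wire_mask)
    return frame


def sample_motion(first_frame, n_frames, predict_next):
    """Stage 2: sample the motion distribution -- progressively generate
    subsequent frames, conditioning each on its predecessor (the
    frame-consistency strategy in spirit)."""
    video = [first_frame]
    for _ in range(n_frames - 1):
        video.append(predict_next(video[-1]))
    return video


# Toy stand-ins so the sketch runs end-to-end.
toy_denoise = lambda frame, cond: [f + c for f, c in zip(frame, cond)]
first = sample_scene([0, 1, 0], toy_denoise, steps=2)
video = sample_motion(first, n_frames=8,
                      predict_next=lambda f: [0.9 * v for v in f])
```

The key design point the sketch mirrors is the factorization: the mask conditions only the first frame (scene), while every later frame is conditioned on its predecessor (motion), which is what lets the model learn from videos with only sparse annotations.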

Published

2025-04-11

How to Cite

Pan, S., Liu, Y., Zhao, L., Chen, E. Z., Chen, X., Chen, T., & Sun, S. (2025). Label-Efficient Data Augmentation with Video Diffusion Models for Guidewire Segmentation in Cardiac Fluoroscopy. Proceedings of the AAAI Conference on Artificial Intelligence, 39(6), 6308–6316. https://doi.org/10.1609/aaai.v39i6.32675

Section

AAAI Technical Track on Computer Vision V