XraySyn: Realistic View Synthesis From a Single Radiograph Through CT Priors

Authors

  • Cheng Peng, Johns Hopkins University
  • Haofu Liao, University of Rochester
  • Gina Wong, Johns Hopkins University
  • Jiebo Luo, University of Rochester
  • S. Kevin Zhou, Chinese Academy of Sciences; Peng Cheng Laboratory, Shenzhen
  • Rama Chellappa, Johns Hopkins University

Keywords

Healthcare, Medicine & Wellness

Abstract

A radiograph visualizes the internal anatomy of a patient through the use of X-rays, which project 3D information onto a 2D plane. Hence, radiograph analysis naturally requires physicians to relate their prior knowledge of 3D human anatomy to 2D radiographs. Synthesizing novel radiographic views within a small angular range can help physicians interpret anatomy more reliably; however, radiograph view synthesis is severely ill-posed, lacks paired training data, and lacks the differentiable operations needed to leverage learning-based approaches. To address these problems, we use Computed Tomography (CT) for radiograph simulation and design a differentiable projection algorithm, which enables geometrically consistent transformations between the radiography and CT domains. Our method, XraySyn, can synthesize novel views from real radiographs through a combination of realistic simulation and finetuning on real radiographs. To the best of our knowledge, this is the first work on radiograph view synthesis. We show that by gaining an understanding of radiography in 3D space, our method can be applied to radiograph bone extraction and suppression without requiring ground-truth bone labels.
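The abstract's core idea of simulating radiographs from CT rests on projecting a 3D attenuation volume onto a 2D plane via the Beer-Lambert law, using only differentiable operations (summation and exponentiation) so that gradients can flow through the projection. The sketch below illustrates that principle in a deliberately simplified parallel-beam setting; the function name, parameters, and geometry are illustrative assumptions, not the paper's actual projection algorithm.

```python
import numpy as np

def project_radiograph(ct_volume, spacing=1.0, i0=1.0):
    """Toy digitally-reconstructed-radiograph (DRR) projection.

    ct_volume : (D, H, W) array of linear attenuation coefficients (mu).
    spacing   : voxel size along the ray direction, in consistent units.
    i0        : incident X-ray intensity.

    Assumes parallel rays traveling along axis 0; XraySyn's real
    projection geometry and implementation may differ.
    """
    # Line integral of attenuation along each ray (sum over depth).
    line_integral = ct_volume.sum(axis=0) * spacing
    # Beer-Lambert law: transmitted intensity decays exponentially.
    return i0 * np.exp(-line_integral)
```

Because the projection is just a sum followed by an exponential, it is differentiable end to end; in an autodiff framework (e.g. PyTorch) gradients with respect to the CT volume are available for free, and rotating the volume before projection yields a novel view.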

Published

2021-05-18

How to Cite

Peng, C., Liao, H., Wong, G., Luo, J., Zhou, S. K., & Chellappa, R. (2021). XraySyn: Realistic View Synthesis From a Single Radiograph Through CT Priors. Proceedings of the AAAI Conference on Artificial Intelligence, 35(1), 436-444. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16120

Section

AAAI Technical Track on Application Domains