Cone-Beam Computed Tomography (CBCT) Segmentation by Adversarial Learning Domain Adaptation

  • Xiaoqian Jia
  • Sicheng Wang
  • Xiao Liang
  • Anjali Balagopal
  • Dan Nguyen
  • Ming Yang
  • Zhangyang Wang
  • Jim Xiuquan Ji
  • Xiaoning Qian
  • Steve Jiang (email author)
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11769)

Abstract

Cone-beam computed tomography (CBCT) is increasingly used in radiotherapy for patient alignment and adaptive therapy, where organ segmentation and target delineation are often required. However, due to poor image quality, low soft-tissue contrast, and the difficulty of acquiring segmentation labels on CBCT images, developing effective segmentation methods for CBCT has been a challenge. In this paper, we propose a deep model for segmenting organs in CBCT images without requiring labelled training CBCT images. By taking advantage of available segmented computed tomography (CT) images, our adversarial learning domain adaptation method synthesizes CBCT images from CT images. The segmentation labels of the CT images can then help train a deep segmentation network for CBCT images, using both CTs with labels and CBCTs without labels. The adversarial learning domain adaptation is integrated with the CBCT segmentation network training through the designed loss functions, so that the CBCT images synthesized by pixel-level domain adaptation best capture the critical image features that help achieve accurate CBCT segmentation. Our experiments on bladder images from radiation oncology clinics show that our CBCT segmentation with adversarial learning domain adaptation significantly improves segmentation accuracy compared to existing methods that do not perform domain adaptation from CT to CBCT.
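The training objective sketched in the abstract combines an adversarial synthesis term with cycle-consistency (as in CycleGAN) and a supervised segmentation loss on the labelled CTs. The following is a minimal illustrative sketch of such a combined loss on NumPy arrays; the LSGAN-style generator term, the soft Dice segmentation loss, and the weights `lam_cyc` and `lam_seg` are assumptions for illustration, not the paper's reported formulation or hyperparameters.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a predicted probability map and a binary mask."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def cycle_loss(x, x_rec):
    """L1 cycle-consistency between an image and its round-trip reconstruction."""
    return np.mean(np.abs(x - x_rec))

def lsgan_g_loss(d_fake):
    """Least-squares generator loss: push discriminator scores on fakes toward 1."""
    return np.mean((d_fake - 1.0) ** 2)

def total_loss(seg_pred, seg_gt, ct, ct_rec, cbct, cbct_rec, d_fake,
               lam_cyc=10.0, lam_seg=1.0):
    """Combined objective: adversarial + cycle-consistency (both directions)
    + segmentation supervision from the labelled CT domain."""
    return (lsgan_g_loss(d_fake)
            + lam_cyc * (cycle_loss(ct, ct_rec) + cycle_loss(cbct, cbct_rec))
            + lam_seg * dice_loss(seg_pred, seg_gt))
```

Joint training on such a loss is what couples the synthesis and segmentation networks: the generator is rewarded not only for fooling the discriminator but also for producing synthetic CBCTs on which the segmenter still matches the CT labels.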

Keywords

CBCT segmentation · CycleGAN · Domain adaptation

References

  1. Chollet, F., et al.: Keras. https://keras.io (2015)
  2. Goodfellow, I., et al.: Generative adversarial nets. In: NIPS (2014)
  3. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR, pp. 770–778 (2016)
  4. Hoffman, J., et al.: CyCADA: cycle consistent adversarial domain adaptation. In: ICML (2018)
  5. Isola, P., Zhu, J.-Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. In: CVPR, pp. 5967–5976 (2017)
  6. Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (eds.) MICCAI 2015. LNCS, vol. 9351, pp. 234–241. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24574-4_28
  7. Yang, H., et al.: Unpaired brain MR-to-CT synthesis using a structure-constrained CycleGAN. CoRR (2018)
  8. Zhu, J.-Y., et al.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: ICCV (2017)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Xiaoqian Jia (1, 2)
  • Sicheng Wang (1, 3)
  • Xiao Liang (1)
  • Anjali Balagopal (1)
  • Dan Nguyen (1)
  • Ming Yang (1)
  • Zhangyang Wang (3)
  • Jim Xiuquan Ji (2)
  • Xiaoning Qian (2)
  • Steve Jiang (1, email author)

  1. Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern, Dallas, USA
  2. Department of Electrical and Computer Engineering, Texas A&M University, College Station, USA
  3. Department of Computer Science and Engineering, Texas A&M University, College Station, USA