R²-Net: Recurrent and Recursive Network for Sparse-View CT Artifacts Removal

  • Tiancheng Shen
  • Xia Li
  • Zhisheng Zhong
  • Jianlong Wu
  • Zhouchen Lin
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11769)

Abstract

We propose a novel neural network architecture to reduce the streak artifacts generated in sparse-view 2D computed tomography (CT) image reconstruction. The architecture decomposes streak artifact removal into multiple stages through a recurrent mechanism, which fully utilizes information from previous stages to guide the learning of later stages. Within each recurrent stage, the key components operate recursively. The recursive mechanism saves parameters and efficiently enlarges the receptive field through exponentially increasing convolution dilation. To verify its effectiveness, we conduct experiments on the AAPM CT dataset using 5-fold cross-validation. Our proposed method outperforms state-of-the-art methods both quantitatively and qualitatively.
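The abstract describes the architecture only at this high level, so the following PyTorch code is a minimal sketch of the general idea (a recurrent refinement loop whose shared block applies one convolution recursively with exponentially growing dilation), not the authors' R²-Net: the module names, stage and recursion counts, and residual wiring are our own assumptions.

```python
# Illustrative sketch only -- not the authors' implementation.
# Assumes PyTorch; all module names and hyperparameters are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RecursiveDilatedBlock(nn.Module):
    """Applies one shared 3x3 convolution recursively with dilation 1, 2, 4, ...

    Reusing the same weights at every recursion keeps the parameter count
    fixed, while the exponentially growing dilation quickly enlarges the
    receptive field.
    """

    def __init__(self, channels: int, recursions: int = 4):
        super().__init__()
        self.recursions = recursions
        # A single conv layer used only as a holder for the shared weights.
        self.conv = nn.Conv2d(channels, channels, kernel_size=3)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = x
        for r in range(self.recursions):
            dilation = 2 ** r
            # Reuse the same kernel with a larger dilation at each recursion;
            # padding = dilation preserves the spatial size for a 3x3 kernel.
            out = F.conv2d(out, self.conv.weight, self.conv.bias,
                           padding=dilation, dilation=dilation)
            out = self.act(out)
        return out + x  # residual connection keeps the block stable


class RecurrentArtifactRemover(nn.Module):
    """Unrolls several recurrent stages; each stage refines the previous estimate."""

    def __init__(self, channels: int = 64, stages: int = 3):
        super().__init__()
        self.stages = stages
        self.embed = nn.Conv2d(1, channels, 3, padding=1)
        self.block = RecursiveDilatedBlock(channels)
        self.restore = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, sparse_view_image: torch.Tensor) -> torch.Tensor:
        estimate = sparse_view_image
        for _ in range(self.stages):
            features = self.block(self.embed(estimate))
            # Each stage predicts a residual correction of the current estimate.
            estimate = estimate + self.restore(features)
        return estimate


if __name__ == "__main__":
    net = RecurrentArtifactRemover()
    fbp_recon = torch.randn(1, 1, 128, 128)  # stand-in for a sparse-view reconstruction
    print(net(fbp_recon).shape)  # torch.Size([1, 1, 128, 128])
```

Because the same 3×3 kernel is reused at every recursion, a block with r recursions enlarges its receptive field on the order of 2^r without adding parameters, which is the efficiency argument the abstract makes for the recursive mechanism.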

Keywords

Computed Tomography · Sparse-view reconstruction · Convolutional recurrent neural network

Notes

Acknowledgment

We thank Dr. Cynthia McCollough (the Mayo Clinic, USA) for providing CT data of the Low Dose CT Grand Challenge for research purposes.

Zhouchen Lin is supported by National Basic Research Program of China (973 Program) (grant no. 2015CB352502), National Natural Science Foundation (NSF) of China (grant nos. 61625301 and 61731018), and Microsoft Research Asia.

Supplementary material

Supplementary material 1: 490281_1_En_36_MOESM1_ESM.pdf (PDF, 3.5 MB)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Tiancheng Shen (1)
  • Xia Li (2)
  • Zhisheng Zhong (2)
  • Jianlong Wu (2, 3)
  • Zhouchen Lin (2, corresponding author)

  1. Center for Data Science, Peking University, Beijing, China
  2. Key Laboratory of Machine Perception (MOE), School of EECS, Peking University, Beijing, China
  3. School of Computer Science and Technology, Shandong University, Tsingtao, China
