
X-Ray In-Depth Decomposition: Revealing the Latent Structures

  • Shadi Albarqouni
  • Javad Fotouhi
  • Nassir Navab
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10435)

Abstract

X-ray is the most readily available imaging modality and has a broad range of applications that span from diagnosis to intra-operative guidance in cardiac, orthopedic, and trauma procedures. Proper interpretation of the hidden and obscured anatomy in X-ray images remains a challenge and often requires a high radiation dose and imaging from several perspectives. In this work, we aim at decomposing a conventional X-ray image into d X-ray components, each corresponding to an independent, non-overlapping, clipped sub-volume, so that rigid structures are separated into distinct layers, all deformable organs remain in a single layer, and the sum of the components resembles the original input. Our proposed model is validated on 6 clinical datasets (~7200 X-ray images) in addition to 615 real chest X-ray images. Despite the challenging aspects of modeling such a highly ill-posed problem, encouraging results are obtained, paving the way for further contributions in this direction.
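The central constraint described above is that the d decomposed layers must add back up to the original radiograph. The sketch below is a minimal, illustrative PyTorch toy assuming a generic convolutional decomposer and an L1 sum-consistency loss; the tiny network, tensor shapes, and loss term are assumptions for demonstration only and are not the authors' architecture or training objective.

```python
# Minimal sketch (assumption: a toy stand-in, not the paper's model).
# It maps one single-channel X-ray image to d component layers and penalises
# the difference between the sum of those layers and the original input.
import torch
import torch.nn as nn

class ToyDecomposer(nn.Module):
    def __init__(self, d: int = 3):
        super().__init__()
        # Small convolutional stack; a real decomposition network would be deeper.
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, d, 1),   # d output layers
            nn.ReLU(),             # X-ray attenuation layers are non-negative
        )

    def forward(self, x):
        return self.net(x)         # shape: (B, d, H, W)

def sum_consistency_loss(x, layers):
    """The d layers should add up to the input image."""
    recon = layers.sum(dim=1, keepdim=True)
    return nn.functional.l1_loss(recon, x)

if __name__ == "__main__":
    model = ToyDecomposer(d=3)
    x = torch.rand(2, 1, 128, 128)  # batch of synthetic single-channel X-rays
    layers = model(x)
    print("sum-consistency loss:", sum_consistency_loss(x, layers).item())
```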


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Shadi Albarqouni (1)
  • Javad Fotouhi (2)
  • Nassir Navab (1, 2)

  1. Computer Aided Medical Procedures (CAMP), Technische Universität München, Munich, Germany
  2. Whiting School of Engineering, Johns Hopkins University, Baltimore, USA
