Technical and clinical overview of deep learning in radiology

Invited Review
Japanese Journal of Radiology

Abstract

Deep learning has been applied to clinical problems not only in radiology but also in all other areas of medicine. This review provides a technical and clinical overview of deep learning in radiology. To give a practical understanding of deep learning, the techniques are divided into five categories: classification, object detection, semantic segmentation, image processing, and natural language processing. After a brief overview of how the underlying network architectures have evolved, clinical applications based on deep learning are introduced. These applications are then summarized to reveal a key feature of deep learning: its performance depends heavily on the training and test datasets. The core technology of deep learning was developed for image classification tasks, and radiologists are the medical specialists in precisely such tasks. Clinical applications based on deep learning can therefore be expected to contribute to substantial improvements in radiology, and by gaining a better understanding of its features, radiologists could be expected to lead this medical development.
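To make the classification category concrete, the sketch below shows, in PyTorch, the kind of small convolutional classifier that underlies such tasks. It is an illustrative example only: the single-channel 224x224 input, channel counts, layer sizes, and two-class output are assumptions for the sketch, not details taken from this review or from any model described in it.

import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    """Minimal illustrative CNN classifier (hypothetical; not the authors' model)."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Two convolution + pooling stages extract local image features.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # single-channel (grayscale) input assumed
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # A fully connected layer maps the pooled features to class scores (assumes 224x224 input).
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)

# One forward pass on a batch of dummy 224x224 images yields one score per class.
model = SimpleCNN(num_classes=2)
logits = model(torch.randn(8, 1, 224, 224))
print(logits.shape)  # torch.Size([8, 2])

In practice such a classifier is trained with a cross-entropy loss on labeled images, which is why, as emphasized above, performance is highly dependent on the training and test datasets.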


Abbreviations

NLP: Natural language processing
ANN: Artificial neural network
AUC: Area under the curve
ROC: Receiver operating characteristic
CNN: Convolutional neural network
SR: Super resolution
LR: Low resolution
HR: High resolution
GAN: Generative adversarial networks
NAS: Neural architecture search
ILSVRC: ImageNet large-scale visual recognition challenge
FCN: Fully convolutional network
CRF: Conditional random field
R-CNN: Regions with convolutional neural network features
YOLO: You only look once
SSD: Single shot MultiBox detector
PSP: Pyramid scene parsing
FSRCNN: Fast super resolution convolutional neural network
ESPCN: Efficient sub-pixel convolutional neural network
VDSR: Very deep super resolution
DRCN: Deeply-recursive convolutional network
EDSR: Enhanced deep super resolution network
RDN: Residual dense network
DBPN: Deep back-projection networks
ZSSR: Zero-shot super resolution
CBOW: Continuous bag-of-words
GloVe: Global vectors for word representation
DCGAN: Deep convolutional generative adversarial network
XOGAN: Generative adversarial network with XO-structure
ENAS: Efficient neural architecture search
DARTS: Differentiable architecture search
NAO: Neural architecture optimization
HMH: Hemorrhage, mass effect, or hydrocephalus
CT: Computed tomography
SAI: Suspected acute infarct
HCC: Hepatocellular carcinoma
MR: Magnetic resonance
MCI: Mild cognitive impairment
ICH: Intracranial hemorrhage
EDH/SDH: Epidural/subdural hemorrhage
SAH: Subarachnoid hemorrhage
ASL: Arterial spin labeling
VN: Variational network
PICS: Parallel imaging and compressed sensing
DnCNN: Denoising convolutional neural network
PE: Pulmonary embolism
AI: Artificial intelligence


Funding

A separate research project on deep learning for mammography received $10,000 in 2017 from Wellness Open Living Labs, LLC, Osaka, Japan.

Author information


Corresponding author

Correspondence to Daiju Ueda.

Ethics declarations

Conflict of interest

Daiju Ueda received a research grant from Wellness Open Living Labs, LLC.

Ethical considerations

This article does not contain any research involving human participants or animals performed by any of the authors.

About this article


Cite this article

Ueda, D., Shimazaki, A. & Miki, Y. Technical and clinical overview of deep learning in radiology. Jpn J Radiol 37, 15–33 (2019). https://doi.org/10.1007/s11604-018-0795-3

