Deep-learning-based detection and segmentation of organs at risk in nasopharyngeal carcinoma computed tomographic images for radiotherapy planning
Accurate detection and segmentation of organs at risk (OARs) in CT images is a key step in efficient radiation therapy planning for nasopharyngeal carcinoma (NPC) treatment. We developed a fully automated deep-learning-based method, termed the organs-at-risk detection and segmentation network (ODS net), and investigated its performance in automated detection and segmentation of OARs on CT images.
The ODS net consists of two convolutional neural networks (CNNs). The first CNN proposes organ bounding boxes along with their scores, and a second CNN then uses the proposed bounding boxes to predict a segmentation mask for each organ. A total of 185 subjects were included in this study for statistical comparison. Sensitivity and specificity were calculated to assess detection performance, and the Dice coefficient was used to quantitatively measure the overlap between automated and manual segmentations. Paired-samples t tests and analysis of variance were employed for statistical analysis.
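The evaluation metrics named above can be made concrete with a short sketch. The following is a minimal NumPy illustration, not the study's code: the masks are hypothetical, and sensitivity/specificity are computed per voxel here for simplicity, whereas the study applies them at the organ-detection level.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def sensitivity_specificity(pred: np.ndarray, truth: np.ndarray):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical 1-D "masks" for illustration only
truth = np.array([0, 1, 1, 1, 0, 0])
pred = np.array([0, 1, 1, 0, 0, 0])
print(round(dice_coefficient(pred, truth), 3))  # 0.8
```

The Dice coefficient ranges from 0 (no overlap) to 1 (perfect agreement), which is why values above 0.85 indicate strong correspondence with manual delineation.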
The ODS net provided accurate detection, with a sensitivity of 0.997 to 1 for most organs and a specificity of 0.983 to 0.999. Furthermore, segmentation results from the ODS net correlated strongly with manual segmentation, with a Dice coefficient of more than 0.85 for most organs. Over all organs together, the ODS net achieved a significantly higher Dice coefficient (0.861 ± 0.07) than a fully convolutional neural network (FCN) (0.8 ± 0.07; p = 0.0003). The Dice coefficients of each OAR did not differ significantly between patients of different T stages.
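The reported comparison of per-subject Dice coefficients between the ODS net and the FCN rests on a paired-samples t test, whose statistic can be sketched as follows (NumPy only; the sample Dice values are hypothetical, not the study's data):

```python
import numpy as np

def paired_t_statistic(a, b) -> float:
    """t = mean(d) / (sd(d) / sqrt(n)), where d are paired differences."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    n = d.size
    return d.mean() / (d.std(ddof=1) / np.sqrt(n))

# Hypothetical per-subject Dice scores for the two methods
dice_ods = [0.88, 0.86, 0.90]
dice_fcn = [0.87, 0.84, 0.87]
t = paired_t_statistic(dice_ods, dice_fcn)
```

In practice one would use `scipy.stats.ttest_rel`, which additionally returns the p value against the t distribution with n − 1 degrees of freedom.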
The ODS net yielded accurate automated detection and segmentation of OARs in CT images and thereby may improve and facilitate radiotherapy planning for NPC.
• A fully automated deep-learning method (ODS net) is developed to detect and segment OARs in clinical CT images.
• This deep-learning-based framework produces reliable detection and segmentation results and thus can be useful in delineating OARs in NPC radiotherapy planning.
• With this deep-learning-based framework, delineating a single image requires approximately 30 s, which is suitable for clinical workflows.
Keywords: Image processing · Tomography, x-ray computed · Head and neck neoplasms · Organs at risk · Radiotherapy
Abbreviations
CNN: Convolutional neural network
FCN: Fully convolutional neural network
GPU: Graphics processing unit
OAR: Organ at risk
ODS net: Organs-at-risk detection and segmentation network
The author(s) would like to thank the reviewers for their fruitful comments.
This study has received funding from the National Natural Science Foundation of China under Grant Nos. 61671230 and 31271067, the Science and Technology Program of Guangdong Province under Grant No. 2017A020211012, the Guangdong Provincial Key Laboratory of Medical Image Processing under Grant No. 2014B030301042, and the Science and Technology Program of Guangzhou under Grant No. 201607010097.
Compliance with ethical standards
The scientific guarantor of this publication is Yu Zhang.
Conflict of interest
The authors declare that they have no conflict of interest.
Statistics and biometry
No complex statistical methods were necessary for this paper.
Written informed consent was waived by the Institutional Review Board.
Institutional Review Board approval was obtained.
• performed at one institution