Accuracy of ultrawide-field fundus ophthalmoscopy-assisted deep learning for detecting treatment-naïve proliferative diabetic retinopathy
We investigated the use of ultrawide-field fundus images with a deep convolutional neural network (DCNN), a machine learning technique, to detect treatment-naïve proliferative diabetic retinopathy (PDR).
We trained the DCNN using 378 photographic images (132 PDR and 246 non-PDR) and constructed a deep learning model. The area under the curve (AUC), sensitivity, and specificity were examined.
The constructed deep learning model demonstrated a high sensitivity of 94.7% and a high specificity of 97.2%, with an AUC of 0.969.
Our findings suggested that PDR could be diagnosed using wide-angle camera images and deep learning.
Keywords: Ultrawide-field fundus ophthalmoscopy · Proliferative diabetic retinopathy · Deep learning · Deep convolutional neural network
According to a World Health Organization report, the number of diabetic patients worldwide increased from 108 million in 1980 to 422 million in 2014, and the worldwide prevalence of diabetes in adults (> 18 years of age) increased from 4.7% in 1980 to 8.5% in 2014. Voigt et al. reported that 25.8% of diabetic patients have complications of retinopathy (nonproliferative, 20.2%; proliferative, 4.7%; unclassified, 0.7%; blindness, 0.1%). Early treatment, compared with deferral of photocoagulation, was associated with a small reduction in the incidence of severe visual loss; however, having every diabetic patient undergo fundus examination by an ophthalmologist is unrealistic and costly. Furthermore, diabetic retinopathy carries a large cost burden, and the financial impact may be even more severe for the many patients with this complication who live in developing countries.
Recently, image processing technology using deep learning, a machine learning approach, has attracted attention because of its accuracy, and its use in medical imaging is being actively studied [5, 6, 7]. Image-based diagnosis has already been reported in ophthalmology [8, 9, 10, 11]. In addition, the advent of wide-angle fundus cameras, such as the ultrawide-field scanning laser ophthalmoscope (Optos 200Tx; Optos plc, Dunfermline, UK), known as Optos, has made it possible to capture a wide area of the fundus simply and noninvasively [12, 13, 14]. In the present study, we assessed the accuracy of ultrawide-field fundus images combined with deep learning for detecting treatment-naïve proliferative diabetic retinopathy (PDR).
The procedures in the present study conformed to the tenets of the Declaration of Helsinki. Informed consent was obtained from the subjects after they understood the study’s nature and possible consequences.
In this study, we used K-fold cross-validation. Briefly, the image data were divided into K groups; K−1 groups were used as training data, and the remaining group was used as validation data. This process was repeated until each group had served as the validation dataset. In the present study, we divided the data into nine groups. The images of the training dataset were augmented by brightness adjustment, gamma correction, histogram equalization, noise addition, and inversion, which increased the amount of training data 18-fold. The deep convolutional neural network (DCNN) model, detailed below, was created and trained with the preprocessed image data.
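The fold construction described above can be sketched as follows; this is a minimal illustration, not the authors' code, and the function name and random seed are ours:

```python
import numpy as np

def kfold_indices(n_samples, k, seed=0):
    """Yield (train, validation) index arrays; each fold is validation once."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        # Augmentation (brightness, gamma, equalization, noise, inversion)
        # would be applied to the training images only, expanding them 18-fold.
        yield train, val

# 378 images split into 9 folds, as in the study: each fold holds 42 images
splits = list(kfold_indices(378, 9))
```

In practice, stratifying the folds by class (PDR vs. non-PDR) keeps the class ratio stable across the nine validation sets.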
Deep learning model and training the model
The VGG-16 comprised five blocks and three fully connected layers. Each block comprised several convolutional layers followed by a max-pooling layer to decrease position sensitivity and improve generic recognition. After flattening the output of block 5, there were two fully connected layers: the first removed the spatial information from the extracted features, and the last was a classification layer that applied the softmax function for binary classification to the feature vectors acquired from the previous layers. To improve generalization performance, we applied dropout, masking the first fully connected layer with a probability of 25%. Fine-tuning was used to increase the learning speed and optimize performance even with limited data [20, 21]. We used parameters pretrained on ImageNet: blocks 1–4 were fixed, and block 5 and the fully connected layers were trained. The weights of block 5 and the fully connected layers were updated using the momentum SGD optimization algorithm (learning coefficient = 0.001, inertial term = 0.9), a stochastic gradient descent method [22, 23]. Of the 40 deep learning models obtained from 40 learning cycles, the one with the highest accuracy on the test data was selected as the deep learning model. To build and evaluate the model, Keras (https://keras.io/ja/) was run on TensorFlow (https://www.tensorflow.org/) in Python.
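The momentum-SGD update applied to block 5 and the fully connected layers (learning coefficient 0.001, inertial term 0.9) amounts to the following rule; this is a generic sketch of the optimizer's mathematics, not the study's implementation:

```python
import numpy as np

def momentum_sgd_step(w, grad, velocity, lr=0.001, momentum=0.9):
    """One momentum-SGD update: v <- momentum*v - lr*grad; w <- w + v."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

# Frozen layers (blocks 1-4) simply skip this update; only the trainable
# weights (block 5 and the fully connected layers) are passed through it.
```

The inertial term accumulates past gradients, damping oscillations and accelerating descent along directions where successive gradients agree.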
Receiver operating characteristic (ROC) curves were created based on the deep learning models’ abilities to discriminate between PDR and non-PDR images. These curves were evaluated using the area under the curve (AUC), sensitivity, and specificity.
Student’s t test was used to compare age, whereas Fisher’s exact test was used to compare the ratios of men to women and of right to left eye images. The 95% confidence intervals (CIs) of the AUCs were obtained as follows. Images judged to exceed a threshold were defined as positive for PDR, and an ROC curve was created; we thus created nine models and nine ROC curves. For the AUC, a 95% CI was obtained by assuming a normal distribution, using the means and standard deviations of the nine ROC curves. For sensitivity and specificity, we used the optimal cutoff values, defined as the points on each ROC curve closest to 100% sensitivity and 100% specificity. The ROC curves were calculated using scikit-learn, and the CIs for sensitivity and specificity were determined using SciPy. The other statistical analyses were performed using SPSS version 22 software (IBM, Armonk, New York, USA). A two-sided P value of < 0.05 was considered statistically significant.
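The ROC construction described above (sweep a threshold over the model's PDR scores, then integrate the true-positive rate against the false-positive rate) can be sketched without scikit-learn as follows; the names are illustrative and the snippet assumes distinct scores:

```python
import numpy as np

def roc_auc(scores, labels):
    """ROC points and AUC from PDR scores (higher = more PDR-like)."""
    order = np.argsort(-np.asarray(scores))      # sort by descending score
    labels = np.asarray(labels)[order]
    tpr = np.cumsum(labels) / labels.sum()       # sensitivity at each cutoff
    fpr = np.cumsum(1 - labels) / (1 - labels).sum()
    tpr = np.concatenate([[0.0], tpr])           # start the curve at (0, 0)
    fpr = np.concatenate([[0.0], fpr])
    auc = np.sum((fpr[1:] - fpr[:-1]) * (tpr[1:] + tpr[:-1]) / 2)  # trapezoids
    return fpr, tpr, auc
```

The optimal cutoff can then be read off as the curve point closest to (0, 1), i.e., simultaneously nearest 100% sensitivity and 100% specificity.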
The Optos image datasets analyzed during the present study are available from the corresponding author upon request.
[Table: baseline characteristics — number of images (patients) per group; mean age, 55.3 ± 12.5 vs. 55.2 ± 13.9 years]
Performance of the DCNN
In this study, we investigated the deep learning method’s efficacy in identifying referable treatment-naïve PDR based on 132 fundus photographs. The deep learning algorithm showed a high sensitivity of 94.7%, a high specificity of 97.2%, and an AUC of 0.969 for the detection of treatment-naïve PDR. We focused on treatment-naïve PDR only, because it can require immediate treatment. Even when the diagnosis was based on color photographs alone, the results were comparable to those based on color fundus images and fluorescein angiography (FA) assessment by retinal specialists.
In the past, deep learning was examined at all stages of diabetic retinopathy, and good results were obtained [8, 26, 27, 28, 29]; however, those studies used fundus cameras that image only the posterior pole. In the present study, we used a wide-angle fundus camera, because diabetic retinopathy is an important disease that can affect both the posterior region and the periphery of the retina. Lesions located predominantly outside the seven standard ETDRS fields are defined as predominantly peripheral lesions (PPLs), and the extent of these PPLs is associated with retinopathy progression [31, 32]. Therefore, the type of camera used is important.
A drawback of the present study was that we did not examine diabetic maculopathy, which causes vision disturbances and can also be diagnosed using deep learning, as reported by Gulshan et al. The algorithms’ ability to detect vision-threatening diabetic retinopathy is important to evaluate; the software’s sensitivity is especially important to determine. Deep learning originally required tens of thousands of images to investigate the presence or absence of a diagnosis; however, the number of available treatment-naïve PDR cases was limited. Therefore, further studies are needed to assess whether diabetic retinopathy can be staged appropriately.
At present, making a final diagnosis based on images alone might be neither accurate nor prudent; we believe that image-based diagnosis should only be used to confirm a doctor’s diagnosis. However, in developing countries, where the number of physicians may be limited, remote image diagnosis may be especially useful. Kanjee et al. reported that remote diagnosis was cost-effective, noting a further reduction in medical expenses when automatic diagnosis was available. To address the most urgent medical problems in the world in an efficient, timely, and cost-effective manner, all available resources are needed. Therefore, introducing artificial intelligence into the medical field is timely, welcomed, and needed.
Although combining the DCNN with Optos images can provide good results, it is not superior to in-person medical examination. Examinations by ophthalmologists remain indispensable for a definitive diagnosis. Furthermore, both conventional angiography and optical coherence tomography angiography, performed by a retinal specialist, are essential to confirm a qualitative diagnosis, assess treatment effects, and provide follow-up observations.
In conclusion, PDR could be diagnosed using an approach that combines wide-angle camera images and deep learning.
We thank Masayuki Miki and the orthoptists at Tsukazaki Hospital for their support in data collection.
Compliance with ethical standards
Conflicts of interest
The authors declare that they have no competing financial interests.
Human and animal rights
Approval to perform this study was obtained from the institutional review boards of Saneikai Tsukazaki Hospital and Tokushima University Hospital.
- 1. World Health Organization (2017) Media centre diabetes fact sheet. http://www.who.int/mediacentre/factsheets/fs312/en/. Accessed 30 Oct 2018
- 8. Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A, Venugopalan S, Widner K, Madams T, Cuadros J, Kim R, Raman R, Nelson PC, Mega JL, Webster DR (2016) Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 316:2402–2410
- 15. Kohavi R (1995) A study of cross-validation and bootstrap for accuracy estimation and model selection. Proc Int Joint Conf Artif Intell 2:1137–1145
- 16. Deng J, Dong W, Socher R (2009) ImageNet: a large-scale hierarchical image database. Comput Vis Pattern Recognit 9:248–255
- 17. Lee CY, Xie S, Gallagher P, Zhang Z, Tu Z (2015) Deeply-supervised nets. AISTATS 2:562–570
- 19. Scherer D, Müller A, Behnke S (2010) Evaluation of pooling operations in convolutional architectures for object recognition. In: Proceedings of the International Conference on Artificial Neural Networks (ICANN), pp 92–101
- 21. Redmon J, Divvala S, Girshick R, Farhadi A (2015) You only look once: unified, real-time object detection. arXiv:1506.02640
- 22. Nesterov Y (1983) A method for unconstrained convex minimization problem with the rate of convergence O(1/k²). Doklady AN USSR 269:543–547
- 25. Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, Batra D (2016) Grad-CAM: visual explanations from deep networks via gradient-based localization. arXiv:1610.02391v3
- 29. Ting DS, Cheung CY, Lim G, Tan GS, Quang ND, Gan A, Hamzah H, Garcia-Franco R, San Yeo IY, Lee SY, Wong EY, Sabanayagam C, Baskaran M, Ibrahim F, Tan NC, Finkelstein EA, Lamoureux EL, Wong IY, Bressler NM, Sivaprasad S, Varma R, Jonas JB, He MG, Cheng CY, Cheung GCM, Aung T, Hsu W, Lee ML, Wong TY (2017) Development and validation of a deep learning system for diabetic retinopathy and related eye diseases using retinal images from multiethnic populations with diabetes. JAMA 318:2211–2223
Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.