
Diagnosing of Diabetic Retinopathy with Image Dehazing and Capsule Network

  • Utku Kose
  • Omer Deperlioglu
  • Jafar Alzubi
  • Bogdan Patrut
Part of the Studies in Computational Intelligence book series (SCI, volume 909)

Abstract

As discussed in Chap. 4, diabetic retinopathy (DR) can lead to severe outcomes such as blindness, and it has therefore become a prominent medical problem in recent research. Retinal pathologies in particular are among the leading causes of the millions of blindness cases seen worldwide [1]. When the cases of blindness are examined in detail, it has been reported that around two million of them are caused by diabetic retinopathy, so early diagnosis has the highest priority for eliminating, or at least slowing down, the disease factors that cause blindness and thereby reducing blindness rates [2, 3].

Considering Fig. 9.1, which gives a more detailed view than the one provided in Chap. 4, it is possible to see the state of the retina with components such as the blood vessels, the macula, and the optic disc. As changes in these components are signs of DR, the disease can be examined in two stages: non-proliferative DR (NPDR), where diabetes damages the blood vessels so that leaking blood affects the function of the retina negatively, and proliferative DR (PDR), where new, abnormal blood vessels grow on the retina and may cause blindness in the end. NPDR can lead to different signs of retinopathy, namely microaneurysms (MAs), hard and soft exudates (EXs), hemorrhages (HMs), and intra-retinal microvascular abnormalities (IRMA) [1, 4]. Gathering all of these signs and states together, it is possible to talk about a five-type grading of DR, as shown in Fig. 9.1 [5].
Fig. 9.1

Eye retina states in the context of DR: a Normal—b Mild NPDR—c Moderate NPDR—d Severe NPDR—e Proliferative DR—f Macular edema [5]

Recalling the diagnosis of DR, it is worth briefly mentioning some alternative research works. Sreejini and Govindan used optic disc elimination, exudate segmentation, fovea and macular region localization, and then classification of DR; in detail, they employed image processing, an intelligent optimization technique (Particle Swarm Optimization, PSO), and Fuzzy C-Means clustering [6]. Seoud and colleagues proposed a grading system that can automatically perform decision-making for DR; in their study, they detected red lesions to form a probability map of the lesions and then classified each image using 35 features combining size and probability information [7]. Acharya et al. used a support vector machine model for mass screening of diabetic retinopathy, which was done automatically via tissue properties [8]. In another study, Safitri and Juniati achieved early detection of microaneurysms (MAs) by extracting candidate regions for MAs within the retinal image and then classifying the related regions with a hybrid approach including a Gaussian mixture model and a support vector machine [9]. Savarkar and colleagues proposed a method of detecting MAs by analyzing density values along discrete segments of different directions centered at the candidate pixel; in this method, peak values were determined first, and then the feature set was extracted and classified [10]. Finally, Akram et al. conducted research on diagnosing DR with fractal analysis and k-nearest neighbor (kNN) techniques [11].

In this chapter, the diagnosis of DR is addressed this time with the Capsule Network, briefly called CapsNet. CapsNet is an improved version of the convolutional neural network (CNN), a widely used deep learning technique, and it inherits the important advantages of deep learning [12]. In addition to the solution in Chap. 4, Deperlioglu and Kose previously used a practical image processing method, including HSV conversion, V transformation, and histogram equalization techniques, to improve retinal fundus images for better classification (diagnosis) with a CNN [13]. An alternative work with the CNN was also done in [14], which employed histogram equalization (HE) as well as contrast limited adaptive histogram equalization (CLAHE) to provide better data for the CNN. There is also another use of CNN for developing a diagnosis/decision support system with no user input, as done by Pratt et al. in [15]. Here, the question of whether CapsNet can further improve the results, especially compared with CNN, is addressed, together with an alternative use of image processing based on a simple image dehazing technique.

9.1 Materials and Method

In this study, the diagnosis of DR was done with a two-step approach: image processing and then classification with CapsNet. For training and testing, the Kaggle Diabetic Retinopathy Detection database was chosen as the target data. The related stages of the DR diagnosis are represented in detail in Fig. 9.2.
Fig. 9.2

The stages in detail for the diagnosis of the diabetic retinopathy

9.1.1 Kaggle Diabetic Retinopathy Database for Diagnosis

The DR database provided on the Kaggle platform is a public set including over 80,000 color fundus images [16]. The initial data set consisted of 88,702 color fundus images gathered from a total of 44,351 patients. The images were collected from several primary care centers in California and elsewhere with various digital fundus cameras. All files are in JPEG format, with resolutions ranging from 433 × 289 pixels to 5184 × 3456 pixels (median resolution 3888 × 2592 pixels), and the images were uploaded to EyePACS, which is a DR screening platform [17]. For each eye, the severity of DR was rated by an expert on the ETDRS scale [18]. The grades are, respectively, 'absence of DR', 'mild non-proliferative DR (NPDR)', 'moderate NPDR', 'severe NPDR', and 'proliferative DR (PDR)' [19].
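
For readers who want to reproduce the data handling, the following minimal sketch loads the Kaggle labels; it assumes the public competition layout (a trainLabels.csv file with "image" and "level" columns and JPEG files named after the "image" column), and the local folder name is hypothetical.

```python
# Minimal sketch for loading the Kaggle DR labels; the folder name is hypothetical
# and the file/column layout (trainLabels.csv with "image" and "level") is assumed
# to follow the public competition release.
import os
import pandas as pd

DATA_DIR = "kaggle_dr"                                   # hypothetical local folder
labels = pd.read_csv(os.path.join(DATA_DIR, "trainLabels.csv"))

# Map the 0-4 severity levels to the grade names used in the text.
grade_names = {0: "No DR", 1: "Mild NPDR", 2: "Moderate NPDR",
               3: "Severe NPDR", 4: "Proliferative DR"}
labels["grade"] = labels["level"].map(grade_names)

# Build the full path of each fundus image.
labels["path"] = labels["image"].apply(
    lambda name: os.path.join(DATA_DIR, "train", f"{name}.jpeg"))

print(labels["grade"].value_counts())
```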

9.1.2 Image Processing

In this study, a simple and fast image enhancement method was used, which gives performance close to that of more complex, mixed methods. This method consists of dark-channel-prior-based image dehazing followed by a guided image filter.

The dark channel prior is a kind of statistic of haze-free outdoor images used for image dehazing. It is based on an important observation: most local patches in haze-free outdoor images contain some pixels with very low intensity in at least one color channel. By using this prior in the haze imaging model, the thickness of the haze can be estimated directly and a high-quality haze-free image can be recovered. The dark channel prior is thus a simple but sufficiently powerful prior for single image haze removal, and combining the haze imaging model with this prior makes single image haze removal more effective and easier [20].
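
As an illustration, the sketch below implements this dehazing procedure (dark channel, atmospheric light, transmission, recovery) with NumPy and OpenCV; the patch size, omega, and t0 values are common illustrative defaults, not necessarily the settings used in this chapter.

```python
# A sketch of dark-channel-prior dehazing in the spirit of He et al. [20];
# parameter values (patch, omega, t0) are illustrative assumptions.
import cv2
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over the color channels, then a min-filter over a patch."""
    min_rgb = np.min(img, axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def estimate_atmosphere(img, dark, top_fraction=0.001):
    """Average the brightest dark-channel pixels as the atmospheric light A."""
    n_top = max(1, int(dark.size * top_fraction))
    idx = np.argsort(dark.ravel())[-n_top:]
    return img.reshape(-1, 3)[idx].mean(axis=0)

def dehaze(bgr, patch=15, omega=0.95, t0=0.1):
    img = bgr.astype(np.float64) / 255.0
    A = estimate_atmosphere(img, dark_channel(img, patch))
    t = 1.0 - omega * dark_channel(img / A, patch)   # transmission estimate
    t = np.clip(t, t0, 1.0)[..., None]
    J = (img - A) / t + A                            # recover scene radiance
    return np.clip(J * 255.0, 0, 255).astype(np.uint8)

# Example: dehazed = dehaze(cv2.imread("46_left.jpeg"))
```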

After dehazing, a guided filter is used for smoothing the colors and sharpening the edges. The guided filter, derived from a local linear model, computes the filtering output by considering the content of a guidance image, which may be the input image itself or a different image. The guided filter can be used as an edge-preserving smoothing operator, like the popular bilateral filter, but it behaves better near edges. It can also transfer the structures of the guidance image to the filtering output, which enables new filtering applications such as guided feathering and dehazing [21].
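
The guided filter can be sketched compactly as below (grayscale guide, applied channel by channel); the radius and regularization parameter eps are illustrative assumptions rather than the exact values used in the study.

```python
# A compact sketch of the guided filter of He et al. [21] with a grayscale guide;
# radius and eps are illustrative assumptions.
import cv2
import numpy as np

def guided_filter(guide, src, radius=8, eps=1e-3):
    """guide, src: float images in [0, 1]; guide is single-channel."""
    box = lambda x: cv2.blur(x, (2 * radius + 1, 2 * radius + 1))
    mean_I, mean_p = box(guide), box(src)
    var_I = box(guide * guide) - mean_I * mean_I
    cov_Ip = box(guide * src) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)            # local linear coefficients
    b = mean_p - a * mean_I
    return box(a) * guide + box(b)        # output q = mean_a * I + mean_b

def smooth_color_image(bgr, radius=8, eps=1e-3):
    img = bgr.astype(np.float64) / 255.0
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float64) / 255.0
    out = np.stack([guided_filter(gray, img[..., c], radius, eps)
                    for c in range(3)], axis=-1)
    return np.clip(out * 255.0, 0, 255).astype(np.uint8)

# Example: enhanced = smooth_color_image(dehazed)
```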

9.1.3 Classification

The classification approach for diagnosing DR in this study was based on the Capsule Network (CapsNet), which is an effective, recent deep learning technique. CapsNet was applied to the related Kaggle database after the image processing phase.

CapsNet is a recent deep learning architecture employing capsules, which are groups of artificial neurons acting as data processing components. CapsNet was developed as a solution to the problem of discarding some information (i.e., the position and pose of the target object) during the pooling process of convolutional neural networks (CNN). In a typical CapsNet, each capsule detects a single component of the target object, and eventually all capsules form the whole structure of the object collaboratively [22, 23, 24]. As a typical improvement of the CNN, CapsNet models include multiple layers. Figure 9.3 represents a typical structure of the CapsNet [25].
Fig. 9.3

A typical structure of the capsule network (CapsNet) [25]
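
To make the capsule idea described above more concrete, the following NumPy sketch shows the "squash" nonlinearity and the routing-by-agreement loop of Sabour et al. [22]; the capsule counts, dimensions, and iteration number are toy values chosen for illustration only.

```python
# A minimal NumPy sketch of squashing and dynamic routing between capsules [22];
# shapes and the number of routing iterations are illustrative assumptions.
import numpy as np

def squash(v, axis=-1, eps=1e-8):
    """Shrink a capsule's vector length into (0, 1) while preserving direction."""
    norm_sq = np.sum(v * v, axis=axis, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * v / np.sqrt(norm_sq + eps)

def dynamic_routing(u_hat, n_iters=3):
    """u_hat: predictions of shape (n_lower, n_upper, dim_upper)."""
    n_lower, n_upper, _ = u_hat.shape
    b = np.zeros((n_lower, n_upper))                           # routing logits
    for _ in range(n_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)   # coupling coefficients
        s = (c[..., None] * u_hat).sum(axis=0)                 # weighted sum per upper capsule
        v = squash(s)                                          # upper capsule outputs
        b += (u_hat * v[None, ...]).sum(axis=-1)               # agreement update
    return v

# Toy example: 32 lower capsules predicting 5 output (grade) capsules of dimension 16.
u_hat = np.random.randn(32, 5, 16)
print(dynamic_routing(u_hat).shape)                            # (5, 16)
```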

9.1.4 Evaluation of the Diagnosis

As in many other medical applications, especially those involving diagnosis, the following evaluation metrics were used in this study to evaluate the developed solution [26]:
$$\text{Accuracy} = \frac{\text{TrP} + \text{TrN}}{\text{P} + \text{N}}$$
(1)
$$\text{Sensitivity} = \frac{\text{TrP}}{\text{P}}$$
(2)
$$\text{Specificity} = \frac{\text{TrN}}{\text{N}}$$
(3)
$$\text{Precision} = \frac{\text{TrP}}{\text{TrP} + \text{FrP}}$$
(4)
$$\text{Recall} = \text{Sensitivity}$$
(5)
$$F\text{-score} = 2 \times \frac{\text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$$
(6)
$$\text{gmean} = \sqrt{\text{Sensitivity} \times \text{Specificity}}$$
(7)

In the context of these equations, TrP and FrP denote the total numbers of true positives and false positives, respectively, in the performed diagnosis, while TrN and FrN denote the total numbers of true negatives and false negatives. P and N are the total numbers of positive and negative samples, so the denominator of Eq. (1) is the total number of data samples (P + N), whereas the denominator of Eq. (3) is the number of negatives (N). For a specific classification technique, Accuracy expresses the ratio of correctly diagnosed samples, Sensitivity defines the extent to which the classifier detects the target class correctly, and Specificity expresses its ability to separate the target classes [27, 28].
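
For illustration, the helper below computes the metrics of Eqs. (1)–(7) from true and predicted labels in a one-vs-rest fashion; the macro averaging over the five DR grades is an assumption, since the chapter does not state how the multi-class counts were aggregated.

```python
# A sketch of the evaluation metrics in Eqs. (1)-(7); macro averaging over the
# classes is an assumption made for this illustration.
import numpy as np

def diagnosis_metrics(y_true, y_pred, classes=(0, 1, 2, 3, 4)):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    per_class = []
    for c in classes:
        TrP = np.sum((y_pred == c) & (y_true == c))
        FrP = np.sum((y_pred == c) & (y_true != c))
        TrN = np.sum((y_pred != c) & (y_true != c))
        FrN = np.sum((y_pred != c) & (y_true == c))
        P, N = TrP + FrN, TrN + FrP
        sens = TrP / P if P else 0.0
        spec = TrN / N if N else 0.0
        prec = TrP / (TrP + FrP) if (TrP + FrP) else 0.0
        acc = (TrP + TrN) / (P + N)
        f1 = 2 * prec * sens / (prec + sens) if (prec + sens) else 0.0
        per_class.append((acc, sens, spec, prec, f1, np.sqrt(sens * spec)))
    keys = ("accuracy", "sensitivity", "specificity", "precision", "f_score", "gmean")
    return dict(zip(keys, np.mean(per_class, axis=0)))
```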

9.2 Application and Evaluation

MATLAB R2017a software was used for all image processing and classification/diagnosis processes. In the study, 200 color fundus images from the Kaggle database were used to evaluate the performance of the image-processing-supported CapsNet model. At this point, 200 images including 157 with no DR (0), 10 with mild NPDR (1), 27 with moderate NPDR (2), 4 with severe NPDR (3), and 2 with proliferative DR (PDR) (4) were selected and used. Consequently, the classification has five output classes: 0, 1, 2, 3, and 4.

First, image enhancement was applied to these images. Figure 9.4 shows the images after the image enhancement steps for 46_left.jpeg. In the image processing experiments, the entropy and mean values were examined in order to evaluate the obtained results. For example, for the 46_left.jpeg image, the entropy of the original image is 2.2036, and the entropy of the enhanced image increased to 2.6634. Similarly, the mean value of the original image is 204.2431, and the mean value of the enhanced image increased to 209.6221. Since higher entropy and mean values indicate better visualization (more detail and brightness), these measurements show that the images were improved.
Fig. 9.4

The images after the image processing steps
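
The two quality measures quoted above can be computed as in the short sketch below (Shannon entropy of the grayscale histogram and the mean pixel intensity); this follows the usual definitions and is not necessarily the exact MATLAB routine used in the study.

```python
# A sketch of the image quality measures discussed above: grayscale histogram
# entropy and mean pixel intensity (standard definitions, assumed here).
import cv2
import numpy as np

def image_entropy_and_mean(path):
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = float(-np.sum(p * np.log2(p)))   # Shannon entropy in bits
    return entropy, float(img.mean())

# Example: image_entropy_and_mean("46_left.jpeg")
```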

In order to better understand the improvements in the image, only original images and improved images are shown in Fig. 9.5.
Fig. 9.5

Original images and improved images

In the context of the DR diagnosis process, the enhanced color fundus images were classified by the CapsNet model. The CapsNet designed for diagnosing DR consisted of 5 layers in total. These layers are, respectively, the image input layer (with parameters [195 322 3]), a convolutional layer (3 × 3 × 256), the primary caps layer (3 × 3 × (1 × 256)), the fundus caps layer ((7 × 7) × 256), and the output (classification) layer.
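
As a hedged illustration of such a five-stage capsule model (input, convolution, primary capsules, class "fundus" capsules, output), the PyTorch sketch below follows the same layer ordering; it assumes small 32 × 32 inputs and generic capsule sizes to keep the toy model manageable, rather than the exact [195 322 3] configuration reported above.

```python
# A toy PyTorch capsule classifier mirroring the layer list above; input size,
# capsule dimensions, and lazy weight creation are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    """Capsule nonlinearity: shrink vector norm into (0, 1), keep direction."""
    n2 = (s * s).sum(dim=dim, keepdim=True)
    return (n2 / (1.0 + n2)) * s / torch.sqrt(n2 + eps)

class CapsNetDR(nn.Module):
    def __init__(self, n_classes=5, n_maps=32, prim_dim=8, out_dim=16):
        super().__init__()
        self.conv = nn.Conv2d(3, 256, kernel_size=3)                       # feature extractor
        self.primary = nn.Conv2d(256, n_maps * prim_dim, kernel_size=3, stride=2)
        self.n_classes, self.n_maps = n_classes, n_maps
        self.prim_dim, self.out_dim = prim_dim, out_dim
        self.W = None   # prediction weights, created once the capsule count is known

    def forward(self, x, n_routing=3):
        x = F.relu(self.conv(x))
        u = self.primary(x)                                   # B x (n_maps*prim_dim) x H x W
        B, _, H, W = u.shape
        u = u.view(B, self.n_maps, self.prim_dim, H * W)      # group channels into capsules
        u = squash(u.permute(0, 1, 3, 2).reshape(B, -1, self.prim_dim))
        if self.W is None:                                    # lazy init (sketch only)
            self.W = nn.Parameter(0.01 * torch.randn(
                u.size(1), self.n_classes, self.out_dim, self.prim_dim, device=x.device))
        u_hat = torch.einsum("ncop,bnp->bnco", self.W, u)     # B x N x classes x out_dim
        b = torch.zeros(B, u.size(1), self.n_classes, device=x.device)
        for _ in range(n_routing):                            # routing by agreement
            c = F.softmax(b, dim=-1)
            v = squash((c.unsqueeze(-1) * u_hat).sum(dim=1))  # B x classes x out_dim
            b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)
        return v.norm(dim=-1)                                 # capsule lengths as class scores

# scores = CapsNetDR()(torch.randn(2, 3, 32, 32))   # -> tensor of shape (2, 5)
```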

In the classification/diagnosis, 200 images from the Kaggle database were used; 80% of these images were used for training and the remaining 20% for testing. For randomly selected training and test data, the classification process was repeated 20 times, as sketched below. The obtained lowest, average, and highest values of the performance evaluation metrics are given in Table 9.1.
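
The evaluation protocol can be sketched as follows; classify() is a hypothetical wrapper around the trained CapsNet, and the macro averaging of the per-class metrics (via scikit-learn) is an assumption made for this illustration.

```python
# A sketch of 20 repeated random 80/20 splits with lowest/average/highest reporting;
# classify() is a hypothetical function wrapping the trained model.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

def repeated_evaluation(images, labels, classify, n_runs=20, test_size=0.2):
    runs = []
    for seed in range(n_runs):
        x_tr, x_te, y_tr, y_te = train_test_split(
            images, labels, test_size=test_size, random_state=seed)
        y_pred = classify(x_tr, y_tr, x_te)
        runs.append({
            "accuracy": accuracy_score(y_te, y_pred),
            "sensitivity": recall_score(y_te, y_pred, average="macro", zero_division=0),
            "precision": precision_score(y_te, y_pred, average="macro", zero_division=0),
            "f_score": f1_score(y_te, y_pred, average="macro", zero_division=0),
        })
    # Report (lowest, average, highest) per metric, as in Table 9.1.
    return {k: (min(r[k] for r in runs),
                float(np.mean([r[k] for r in runs])),
                max(r[k] for r in runs)) for k in runs[0]}
```
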
Table 9.1

The lowest, the average and the highest values of performance metrics

Criteria       Lowest    Average   Highest
Accuracy       0.9225    0.9484    0.9650
Sensitivity    0.8150    0.8468    0.8800
Specificity    0.9650    0.9823    0.9833
Precision      0.8950    0.8468    0.8800
Recall         0.8150    0.8468    0.8800
F-score        0.8150    0.8468    0.8800
gmean          0.8143    0.8287    0.8406

As seen from the obtained findings, the combination of image dehazing and the CapsNet model yields high values for the different evaluation metrics. This indicates that the diagnosis solution has a very high sampling, selection, and estimation ability.

9.3 Results

In this chapter, the aim was to explain an easy DR diagnostic method that avoids heavy, detailed image processing methods and complex artificial intelligence techniques. In this context, easy diagnosis of diabetic retinopathy by dehazing the fundus image with a dark channel prior method and classifying it with a Capsule Network (CapsNet) has been proposed. In order to test the performance of the proposed method, an application was created with the Kaggle Diabetic Retinopathy detection database. After image processing, a classification study was performed with a CapsNet model. A total of 20 trials were performed, and the lowest, average, and highest values of the performance evaluation criteria were recorded. The obtained results show that the developed model has a very high sampling, selection, and estimation ability. Thus, the proposed method is effective and efficient in the diagnosis of DR from retinal fundus images. For future work, different image processing methods can be added, or different variations of the CapsNet can be implemented.

9.4 Summary

Humankind has always been dealing with serious diseases that require early diagnosis for better treatment results. As diabetic retinopathy (DR) has the potential to cause blindness, the associated artificial intelligence literature has given remarkable emphasis to designing diagnosis solutions with early diagnosis mechanisms. To achieve this, image processing and machine/deep learning have great synergy for developing innovative and robust solutions. Along these lines, this chapter provided an alternative solution combining image dehazing and the Capsule Network (CapsNet). The solution provided here is another example of diagnosing DR, complementing the one explained in Chap. 4. It can be clearly seen that there are open ways to derive alternative solutions for improving the obtained results. The solutions provided in both Chap. 4 and this chapter can also be applied to the diagnosis of other diseases that can be diagnosed from image data.

As the chapters so far have provided a general collection of medical decision support built on diagnosis processes, there are still many more alternative research directions to pursue, considering the wide variety of diseases. Although humankind desires a disease-free world, that seems impossible because of the chaos in life and nature itself. However, the future will still be associated with further developments and alternative solution ideas at the intersection of artificial intelligence and the medical field. Considering deep learning and the topic of medical decision support systems, the final Chap. 10 is devoted to a general discussion of what kind of future may be shaped by the strong relation between deep learning-oriented decision support solutions and the medical field.

9.5 Further Learning

The readers interested in learning more about medical image analysis and the role of image processing techniques in this manner are referred to [29, 30, 31, 32, 33, 34, 35, 36].

Image processing and deep learning combinations are used in solving many different medical problems. As a very recent collection for understanding some about the current state, the readers can read [37, 38, 39, 40, 41, 42, 43].

As the world is currently (at the time of finalizing the book) dealing with the pandemic caused by the COVID-19 virus, there are also some recently published works focusing on image-based analyses for coronavirus/COVID-19 diagnosis. Some of them are [44, 45, 46, 47, 48, 49].

References

  1. M.U. Akram, S. Khalid, S.A. Khan, Identification and classification of microaneurysms for early detection of diabetic retinopathy. Pattern Recognit. 46(1), 107–116 (2013)
  2. WHO, Blindness causes (2019). Online: http://www.who.int/blindness/causes/priority. Retrieved 28 Dec 2019
  3. G. Quellec et al., Deep image mining for diabetic retinopathy screening. Med. Image Anal. 39, 178–193 (2017)
  4. U.M. Akram et al., Detection and classification of retinal lesions for grading of diabetic retinopathy. Comput. Biol. Med. 45, 161–171 (2014)
  5. M.R.K. Mookiah et al., Computer-aided diagnosis of diabetic retinopathy: a review. Comput. Biol. Med. 43(12), 2136–2155 (2013)
  6. K.S. Sreejini, V.K. Govindan, Severity grading of DME from retina images: a combination of PSO and FCM with Bayes classifier. Int. J. Comput. Appl. 81(16), 11–17 (2013)
  7. L. Seoud, J. Chelbi, F. Cheriet, Automatic grading of diabetic retinopathy on a public database, ed. by X. Chen, M.K. Garvin, J.J. Liu, E. Trusso, Y. Xu, in Proceedings of the Ophthalmic Medical Image Analysis Second International Workshop, OMIA 2015, Held in Conjunction with MICCAI (Munich, Germany, 9 October 2015), pp. 97–104. https://doi.org/10.17077/omia.1032
  8. U.R. Acharya, E.Y.K. Ng, J.H. Tan, An integrated index for the ident. J. Med. Syst. 36(3), 2011–2020. https://doi.org/10.1007/s10916-011-9663-8
  9. D.W. Safitri, D. Juniati, Classification of diabetic retinopathy using fractal dimension analysis of eye fundus image, in International Conference on Mathematics: Pure, Applied and Computation, AIP Conf. Proc. 1867, 020011-1–020011-11 (2017). https://doi.org/10.1063/1.4994414
  10. S.P. Savarkar, N. Kalkar, S.L. Tade, Diabetic retinopathy using image processing detection, classification and analysis. Int. J. Adv. Comput. Res. 3(11), 585–588 (2013)
  11. M.U. Akram, S. Khalid, S.A. Khan, Identification and classification of microaneurysms for early detection of diabetic retinopathy. Pattern Recogn. 46, 107–116 (2013)
  12. P. Chandrayan, Deep learning: autoencoders fundamentals and types. Online: https://codeburst.io/deep-learning-types-and-autoencoders-a40ee6754663. Retrieved 25 Jan 2018
  13. O. Deperlioglu, U. Kose, Diagnosis of diabetic retinopathy by using image processing and convolutional neural network, in 2018 2nd International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT) (IEEE, 2018)
  14. D.J. Hemanth, O. Deperlioglu, U. Kose, An enhanced diabetic retinopathy detection and classification approach using deep convolutional neural network. Neural Comput. Appl. (2019). https://doi.org/10.1007/s00521-018-03974-0
  15. H. Pratt et al., Convolutional neural networks for diabetic retinopathy. Procedia Comput. Sci. 90, 200–205 (2016)
  16. Kaggle, Diabetic retinopathy detection database. Online: https://www.kaggle.com/c/diabetic-retinopathy-detection/data. Retrieved 12 Feb 2020
  17. J. Cuadros, G. Bresnick, EyePACS: an adaptable telemedicine system for diabetic retinopathy screening. J. Diabetes Sci. Technol. 3(3), 509–516 (2009)
  18. C.P. Wilkinson, F.L. Ferris, R.E. Klein, P.P. Lee, C.D. Agardh, M. Davis, D. Dills, A. Kampik, R. Pararajasegaram, J.T. Verdaguer, Proposed international clinical diabetic retinopathy and diabetic macular edema disease severity scales. Ophthalmology 110(9), 1677–1682 (2003). https://doi.org/10.1016/s0161-6420(03)00475-5
  19. G. Quellec et al., Deep image mining for diabetic retinopathy screening. Med. Image Anal. 39, 178–193 (2017)
  20. K. He, J. Sun, X. Tang, Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353 (2010)
  21. K. He, J. Sun, X. Tang, Guided image filtering, in European Conference on Computer Vision (Springer, Berlin, Heidelberg, 2010)
  22. S. Sabour, N. Frosst, G.E. Hinton, Dynamic routing between capsules, in Advances in Neural Information Processing Systems (2017), pp. 3856–3866
  23. A. Mobiny, H. Van Nguyen, Fast CapsNet for lung cancer screening, in International Conference on Medical Image Computing and Computer-Assisted Intervention (Springer, Cham, 2018), pp. 741–749
  24. H. Chao, L. Dong, Y. Liu, B. Lu, Emotion recognition from multiband EEG signals using CapsNet. Sensors 19(9), 2212 (2019)
  25. W. Zhang, P. Tang, L. Zhao, Remote sensing image scene classification using CNN-CapsNet. Remote Sens. 11(5), 494 (2019)
  26. W. Zhang, J. Han, S. Deng, Heart sound classification based on scaled spectrogram and tensor decomposition. Biomed. Signal Process. Control 32, 20–28 (2017)
  27. O. Deperlioglu, Classification of phonocardiograms with convolutional neural networks. BRAIN Broad Res. Artif. Intell. Neurosci. 9(2), 22–33 (2018)
  28. D.J. Hemanth, O. Deperlioglu, U. Kose, An enhanced diabetic retinopathy detection and classification approach using deep convolutional neural network. Neural Comput. Appl. (2019). https://doi.org/10.1007/s00521-018-03974-0
  29. J.S. Duncan, N. Ayache, Medical image analysis: progress over two decades and the challenges ahead. IEEE Trans. Pattern Anal. Mach. Intell. 22(1), 85–106 (2000)
  30. A.P. Dhawan, Medical Image Analysis, vol. 31 (Wiley, 2011)
  31. A. Criminisi, J. Shotton (eds.), Decision Forests for Computer Vision and Medical Image Analysis (Springer Science & Business Media)
  32. M.J. McAuliffe, F.M. Lalonde, D. McGarry, W. Gandler, K. Csaky, B.L. Trus, Medical image processing, analysis and visualization in clinical research, in Proceedings 14th IEEE Symposium on Computer-Based Medical Systems (CBMS) (IEEE, 2001), pp. 381–386
  33. J.L. Semmlow, B. Griffel, Biosignal and Medical Image Processing (CRC Press, 2014)
  34. K.M. Martensen, Radiographic Image Analysis (Elsevier Health Sciences, 2013)
  35. R.M. Rangayyan, Biomedical Image Analysis (CRC Press, 2004)
  36. I. Bankman (ed.), Handbook of Medical Image Processing and Analysis (Elsevier, 2008)
  37. J.R. Hagerty, R.J. Stanley, H.A. Almubarak, N. Lama, R. Kasmi, P. Guo, W.V. Stoecker, Deep learning and handcrafted method fusion: higher diagnostic accuracy for melanoma dermoscopy images. IEEE J. Biomed. Health Inform. 23(4), 1385–1391 (2019)
  38. Y. Gurovich, Y. Hanani, O. Bar, G. Nadav, N. Fleischer, D. Gelbman, L.M. Bird, Identifying facial phenotypes of genetic disorders using deep learning. Nat. Med. 25(1), 60–64 (2019)
  39. K.K. Wong, G. Fortino, D. Abbott, Deep learning-based cardiovascular image diagnosis: a promising challenge. Future Gener. Comput. Syst. (2019)
  40. S. Dabeer, M.M. Khan, S. Islam, Cancer diagnosis in histopathological image: CNN based approach. Inform. Med. Unlocked 16, 100231 (2019)
  41. T. Jo, K. Nho, A.J. Saykin, Deep learning in Alzheimer's disease: diagnostic classification and prognostic prediction using neuroimaging data. Front. Aging Neurosci. 11, 220 (2019)
  42. J. Xu, K. Xue, K. Zhang, Current status and future trends of clinical diagnoses via image-based deep learning. Theranostics 9(25), 7556 (2019)
  43. C.M. Dourado Jr., S.P.P. da Silva, R.V.M. da Nóbrega, A.C.D.S. Barros, P.P. Reboucas Filho, V.H.C. de Albuquerque, Deep learning IoT system for online stroke detection in skull computed tomography images. Comput. Netw. 152, 25–39 (2019)
  44. X. Xu, X. Jiang, C. Ma, P. Du, X. Li, S. Lv, L. Yu, Y. Chen, J. Su, G. Lang, Y. Li, Deep learning system to screen coronavirus disease 2019 pneumonia. arXiv preprint arXiv:2002.09334 (2020)
  45. I.D. Apostolopoulos, T. Bessiana, Covid-19: automatic detection from X-ray images utilizing transfer learning with convolutional neural networks. arXiv preprint arXiv:2003.11617 (2020)
  46. A. Narin, C. Kaya, Z. Pamuk, Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks. arXiv preprint arXiv:2003.10849 (2020)
  47. S. Wang, B. Kang, J. Ma, X. Zeng, M. Xiao, J. Guo, M. Cai, J. Yang, Y. Li, X. Meng, B. Xu, A deep learning algorithm using CT images to screen for corona virus disease (COVID-19). medRxiv (2020)
  48. F. Shan, Y. Gao, J. Wang, W. Shi, N. Shi, M. Han, Z. Xue, Y. Shi, Lung infection quantification of COVID-19 in CT images with deep learning. arXiv preprint arXiv:2003.04655 (2020)
  49. L. Wang, A. Wong, COVID-Net: a tailored deep convolutional neural network design for detection of COVID-19 cases from chest radiography images. arXiv preprint arXiv:2003.09871 (2020)

Copyright information

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021

Authors and Affiliations

  • Utku Kose (1)
  • Omer Deperlioglu (2)
  • Jafar Alzubi (3)
  • Bogdan Patrut (4)

  1. Department of Computer Engineering, Süleyman Demirel University, Isparta, Turkey
  2. Department of Computer Technologies, Afyon Kocatepe University, Afyonkarahisar, Turkey
  3. Faculty of Engineering, Al-Balqa Applied University, Al-Salt, Jordan
  4. Faculty of Computer Science, Alexandru Ioan Cuza University of Iasi, Iasi, Romania
