Abstract
Saliency is important in medical image analysis for detection and segmentation tasks. We propose a new method to extract uniqueness-driven saliency, based on the uniqueness of the intensity and spatial distributions within an image. The main novelty of this saliency feature is that it can detect different types of lesions in different types of images without the need to tune parameters for each problem. To evaluate its effectiveness, we have applied our method to the detection of lesions in retinal images. Four different types of lesions (exudates, hemorrhages, microaneurysms, and leakage) from seven independent public retinal image datasets of diabetic retinopathy and malarial retinopathy were studied, and the experimental results show that the proposed method is superior to the state-of-the-art methods.
1 Introduction
The accurate identification of suspicious regions such as lesions in medical images is significant in the development of computer-aided diagnosis systems. Many different strategies have been developed for the automated detection of lesions in addressing different problems. However, these strategies often work on a single type of lesion with careful parameter optimization and are unlikely to work for other types of lesions without problem-specific optimization. It is therefore essential to develop generic algorithms that achieve accurate and reliable performance in the detection of multiple lesion types without handcrafted parameters.
In this work, we propose a new uniqueness-driven saliency approach for the detection of different types of lesions. The concept of saliency is that an object stands out relative to its neighbors by virtue of its uniqueness or rarity features [1, 2]. Saliency plays an important role in describing the tendency of regions that may contain diagnostically consequential matter to draw the attention of human experts [3]. To evaluate its effectiveness, we aim to segment four different types of retinal lesions related to diabetic and malarial retinopathy: exudates (EX), microaneurysms (MA), and hemorrhages (HA) in retinal color fundus (CF) images [4], and leakage (LK) in fluorescein angiograms (FA) [3]. MA and HA are referred to as dark lesions, while EX and LK are bright lesions.
Diabetic retinopathy (DR) is a leading cause of vision impairment and loss in the working-age population, and early diagnosis of DR is important for the prevention of vision loss [4]. The severity of DR is usually determined by feature-based grading, identifying features such as MA, HA, and EX in color fundus images; automated detection of these features is thus essential for the automated diagnosis of DR [5, 6]. On the other hand, malarial retinopathy is regarded as a surrogate for the differential diagnosis of cerebral malaria, which remains a major cause of death and disability in children in sub-Saharan Africa [3], and LK in angiography is an important sign for determining the activity and development of lesions. Manual grading of these lesions is impractical given the scale of the problem and the shortage of trained professionals. Automated detection of these lesions offers a cost-effective route to early diagnosis and prevention, but the underlying detection problems remain open [7].
A particular strength of the proposed method compared to previous work is that it has undergone rigorous quantitative evaluation on seven independent publicly available datasets including CF and FA images. The experimental results demonstrate its effectiveness and potential for wider medical applications.
2 Method
This work was inspired by the findings of Perazzi et al. [8] that the uniqueness of a component may be used to reveal its rarity within an image. The relative intensity and the spatial distribution of intensity are commonly used properties for investigating saliency [1, 8]. In particular, Cheng et al. [1] suggest that the spatial variance of color can measure whether an element is salient: a lower variance usually implies a more salient element.
Saliency at Coarse Scale: In order to reduce the computational cost, saliency is first derived from superpixels of the image under consideration, generated by the SLIC superpixel algorithm with default parameter settings [9]. Without loss of generality, we assume that N superpixels are generated; the colors of superpixels i and j, \(1 \le i,j \le N\), are \(\mathbf {c}_i\) and \(\mathbf {c}_j\), while their positions are \(\mathbf {p}_i\) and \(\mathbf {p}_j\). The uniqueness saliency \(\mathcal {U}_i\) of superpixel i is then defined by combining the uniqueness in both the intensity and the spatial distribution domains:

\(\mathcal {U}_i = \mathcal {I}_i \cdot \mathrm {exp}(-k\,\mathcal {D}_i)\)    (1)
where \(\mathcal {I}_i\) and \(\mathcal {D}_i\) indicate the uniqueness of superpixel i in the intensity and spatial distribution domains, respectively. Here an exponential function is employed to emphasize \(\mathcal {D}_i\), which is of higher significance and greater diagnostic capability than the intensity measurement \(\mathcal {I}_i\) [8]. The parameter k represents the strength of the spatial weighting, and is set to 6 and −6 for dark and bright lesion detection, respectively. The uniqueness in the intensity domain \(\mathcal {I}_i\) of superpixel i can be obtained by computing its rarity compared to all other superpixels j:

\(\mathcal {I}_i = \sum _{j=1}^{N}{\Vert \mathbf {c}_i - \mathbf {c}_j\Vert }^2 \, {w}^{I}(\mathbf {p}_i, \mathbf {p}_j)\)    (2)
The local weighting function \({w}^{I}(\mathbf {p}_i, \mathbf {p}_j)\) is introduced here so that global and local contrast can be effectively combined with control over the influence radius. A standard Gaussian function is utilized to generate the local contrast in terms of the geometric distance between superpixels i and j: \({{w}^{I}(\mathbf {p}_i, \mathbf {p}_j)=\frac{1}{\mathbf {Z}_i^I}\mathrm {exp}\{ -\frac{\Vert {\mathbf {p}_i-\mathbf {p}_j\Vert }^2}{2\sigma _p^2}\}}\), where the standard deviation \(\sigma _p\) controls the range of the uniqueness operator from 0 to 1 (where 1 = global uniqueness). The normalization term \(\mathbf {Z}_i^I\) ensures that \(\sum _{j=1}^{N}{{w}^{I}(\mathbf {p}_i, \mathbf {p}_j)}=1\). Equation (2) can be decomposed by factoring out:

\(\mathcal {I}_i = \mathbf {c}_i^2 - 2\,\mathbf {c}_i \sum _{j=1}^{N} \mathbf {c}_j {w}^{I}(\mathbf {p}_i, \mathbf {p}_j) + \sum _{j=1}^{N} \mathbf {c}_j^2 {w}^{I}(\mathbf {p}_i, \mathbf {p}_j)\)    (3)
It can be seen that the sums \(\sum ^N_{j=1} \mathbf {c}_j {w}^{I}(\mathbf {p}_i, \mathbf {p}_j)\) and \(\sum ^N_{j=1} \mathbf {c}_j^2 {w}^{I}(\mathbf {p}_i, \mathbf {p}_j)\) in the second and third terms can be regarded as Gaussian blurs of the intensity \(\mathbf {c}_j\) and its square \(\mathbf {c}_{j}^{2}\), respectively. Similarly, the uniqueness of the spatial distribution \(\mathcal {D}_i\) can be computed as:

\(\mathcal {D}_i = \sum _{j=1}^{N}{\Vert \mathbf {p}_j - \mu _i\Vert }^2 \, {w}^{D}(\mathbf {c}_i, \mathbf {c}_j)\)    (4)
where \(\mu _i=\sum _{j=1}^{N}{\mathbf {p}_j}{w}^{D}(\mathbf {c}_i, \mathbf {c}_j)\) defines the weighted mean position of color \(\mathbf {c}_i\), and \({w}^{D}(\mathbf {c}_i, \mathbf {c}_j)\) indicates the similarity between colors \(\mathbf {c}_i\) and \(\mathbf {c}_j\). Similar to \({w}^{I}\), the color similarity weight is also a Gaussian, \({{w}^{D}(\mathbf {c}_i, \mathbf {c}_j)=\frac{1}{\mathbf {Z}_i^D}\mathrm {exp}\{-\frac{\Vert {\mathbf {c}_i-\mathbf {c}_j\Vert }^2}{2\sigma _c^2}\}}\), where \(\mathbf {Z}_i^D\) is defined analogously to \(\mathbf {Z}_i^I\), while \(\sigma _c\) controls the sensitivity of the spatial distribution measure: larger values of \(\sigma _c\) yield larger spatial distribution values, and vice versa. Equation (4) can be expanded as:

\(\mathcal {D}_i = \sum _{j=1}^{N} \mathbf {p}_j^2 \, {w}^{D}(\mathbf {c}_i, \mathbf {c}_j) - \mu _i^2\)    (5)
Again, both terms \(\sum ^N_{j=1} \mathbf {p}_j^2 {w}^{D}(\mathbf {c}_i, \mathbf {c}_j)\) and \(\mu _i^2\) can be efficiently evaluated by Gaussian blurring. After determining \(\mathcal {I}_i\) and \(\mathcal {D}_i\), the uniqueness-based saliency \(\mathcal {U}_i\) can be calculated using Eq. (1).
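The coarse-scale computation above can be sketched in a few lines of NumPy. This is a minimal sketch, not the authors' implementation: superpixel mean intensities and centroids are assumed to be given (e.g., from SLIC), the combination is assumed to take the saliency-filters form \(\mathcal {I}_i\,\mathrm {exp}(-k\,\mathcal {D}_i)\) [8], and the values of \(\sigma _p\) and \(\sigma _c\) below are placeholders, since the text does not fix them.

```python
import numpy as np

def uniqueness_saliency(colors, positions, k=6.0, sigma_p=0.25, sigma_c=20.0):
    """Coarse-scale uniqueness saliency U_i = I_i * exp(-k * D_i) (sketch).

    colors:    (N,) mean intensity of each superpixel
    positions: (N, 2) superpixel centroids, normalized to [0, 1]
    """
    colors = np.asarray(colors, float)
    positions = np.asarray(positions, float)

    # w^I: Gaussian weights over geometric distance; row normalization plays
    # the role of Z_i^I, so each row sums to 1
    dp2 = ((positions[:, None, :] - positions[None, :, :]) ** 2).sum(-1)
    wI = np.exp(-dp2 / (2 * sigma_p ** 2))
    wI /= wI.sum(1, keepdims=True)

    # Factored intensity uniqueness: I_i = c_i^2 - 2 c_i (wI @ c) + wI @ c^2,
    # i.e., the two sums are Gaussian blurs of c and c^2
    I = colors ** 2 - 2 * colors * (wI @ colors) + wI @ (colors ** 2)

    # w^D: Gaussian weights over color similarity, normalized by Z_i^D
    dc2 = (colors[:, None] - colors[None, :]) ** 2
    wD = np.exp(-dc2 / (2 * sigma_c ** 2))
    wD /= wD.sum(1, keepdims=True)

    # Spatial-distribution uniqueness in factored form:
    # D_i = (wD @ ||p||^2) - ||mu_i||^2, with mu_i = wD @ p
    mu = wD @ positions
    D = wD @ (positions ** 2).sum(-1) - (mu ** 2).sum(-1)

    return I * np.exp(-k * D)
```

For a few hundred superpixels the explicit O(N²) weight matrices above are cheap; the factored forms are what permit an even faster evaluation by Gaussian blurring, as noted in the text.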
Saliency at Fine Scale: After the coarse-level estimation, the saliency of each pixel is temporarily assigned the saliency value of the superpixel it belongs to. Further refinement is made by introducing the concept of the bilateral filter. That is, \(S_u=\sum _{v=1}^{M}{w_{uv}\mathbf {U}}_v\), where M is the total number of pixels in the image, \(\mathbf {U}\) is the saliency map at the coarse scale, and the Gaussian weight \(w_{uv}\) is defined as \(w_{uv}=\frac{1}{Z_u}\mathrm {exp}(-\frac{1}{2}(\alpha {\Vert \mathbf {c}_u-\mathbf {c}_v\Vert }^2+\beta {\Vert \mathbf {p}_u-\mathbf {p}_v\Vert }^2))\), where \({Z_u}\) is defined analogously to \(Z_i^D\). In other words, a weighted Gaussian filter that considers both color and position is applied to the coarse-scale saliency map \(\mathbf {U}\) in order to translate per-superpixel saliency into per-pixel saliency. The trade-off between intensity and position is controlled by \(\alpha \) and \(\beta \), which in the present work were both set to 0.01. As a result, the final saliency map highlights salient regions of interest while suppressing the background of the image.
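The fine-scale step can likewise be sketched directly. The quadratic-cost formulation below is only workable for small images and is a naive stand-in for the fast high-dimensional Gaussian filtering used in the saliency-filters framework [8]; apart from \(\alpha =\beta =0.01\), all values are illustrative.

```python
import numpy as np

def refine_saliency(coarse, labels, image, alpha=0.01, beta=0.01):
    """Per-pixel refinement of a coarse superpixel saliency map (sketch).

    coarse: (N,) saliency value per superpixel
    labels: (H, W) superpixel index of each pixel
    image:  (H, W) pixel intensities
    """
    H, W = labels.shape
    U = coarse[labels].ravel()                  # per-pixel coarse saliency
    c = image.ravel().astype(float)
    ys, xs = np.mgrid[0:H, 0:W]
    p = np.stack([ys.ravel(), xs.ravel()], 1).astype(float)

    # w_uv = (1/Z_u) exp(-(alpha*||c_u-c_v||^2 + beta*||p_u-p_v||^2)/2)
    dc2 = (c[:, None] - c[None, :]) ** 2
    dp2 = ((p[:, None, :] - p[None, :, :]) ** 2).sum(-1)
    w = np.exp(-0.5 * (alpha * dc2 + beta * dp2))
    w /= w.sum(1, keepdims=True)                # Z_u normalization

    # S_u = sum_v w_uv U_v: a color- and position-aware smoothing of U
    return (w @ U).reshape(H, W)
```

Because each row of weights is normalized, the refined map is a convex combination of the coarse values, so pixel saliency stays within the range of the superpixel saliencies while edges in the image sharpen the per-pixel map.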
3 Experimental Evaluation
In practice, the large vessels and the optic disc may also be detected as salient, since these are conspicuous objects in retinal images, as shown in Figs. 1, 2, 3 and 4. To detect only the lesions, these structures must be removed from the produced saliency map. In this work, the vessel segmentation method outlined in [14] and the optic disc detection method discussed in [15] were employed.
We have thoroughly evaluated the proposed method on seven publicly available retinal image datasets. These are: the Retina Check project managed by Eindhoven University of Technology (RC-RGB-MA) [10]; the DiaretDB1 [16]; the Retinopathy Online Challenge training set (ROC) [4]; the e-ophtha [17]; the Messidor [18]; the Diabetic Macular Edema (DME-DUKE) [19] dataset collected by Duke University; and the Malarial Retinopathy dataset collected by the University of Liverpool (LIMA) [3].
3.1 Dark Lesion Detection
Many studies, e.g., [5, 6, 11], have reported lesion detection performance defined at the image level, where it is difficult to categorize pixels as true positives and false negatives. In this study, sensitivity against the average number of false positives per image (FPI) was used to measure MA detection performance [4], with sensitivity evaluated at FPI rates of 1/8, 1/4, 1/2, 1, 2, 4, and 8. A final score (FS) was computed by averaging the sensitivity values obtained at these seven predefined FPI rates [20].
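The final score described above can be computed from a FROC curve as follows. This sketch assumes linear interpolation of the curve at the seven reference FPI rates; the exact interpolation used in the protocol of [20] may differ.

```python
import numpy as np

def froc_final_score(fpi_curve, sens_curve,
                     targets=(1/8, 1/4, 1/2, 1, 2, 4, 8)):
    """Average sensitivity at the seven reference FPI operating points.

    fpi_curve:  increasing false-positives-per-image values along the FROC curve
    sens_curve: corresponding sensitivity values
    """
    fpi = np.asarray(fpi_curve, float)
    sens = np.asarray(sens_curve, float)
    # linearly interpolate the FROC curve at each reference FPI, then average
    return np.interp(targets, fpi, sens).mean()
```

For example, a curve passing through (1, 0.5) and (8, 0.8) contributes its interpolated sensitivities at all seven FPI rates equally to the final score.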
Figure 1 shows that the proposed method has successfully detected the MA regions as salient. Table 1 compares the MA detection performance of the proposed method and existing methods in terms of sensitivity against FPI on the e-ophtha, ROC, DiaretDB1, and RC-RGB-MA datasets. Due to the page limit, we report the performance of only the two most recent MA detection methods; this comparison is not intended to be exhaustive. As can be observed, the proposed method outperforms both state-of-the-art methods on all four datasets in terms of final score.
The ability of the proposed method to detect hemorrhages is demonstrated in Fig. 2. Evaluation was undertaken at the image and pixel levels. At the image level, the intention is to reveal whether the hemorrhages in an image can be detected or not. At the pixel level, the goal is to judge detection accuracy in terms of overlap with the ground truth. Table 2 reports the sensitivity values achieved by the proposed method and the selected competitors on the DiaretDB1 dataset. It can be seen that the proposed method achieves the best performance at both the image and lesion levels.
3.2 Bright Lesion Detection
We have evaluated exudate detection using three datasets: DiaretDB1, e-ophtha, and Messidor. Both the DiaretDB1 and e-ophtha datasets provide lesion maps generated by experts, while the Messidor dataset provides a DR diagnosis for each image but no manual annotation of exudate contours. However, Messidor contains information on the risk of macular edema, and the presence of exudates has been used to grade this risk; given the number of images available, it is therefore an important resource for validating the presence of exudates. Figure 3 illustrates the proposed saliency detection method applied to exudate detection. Table 3 shows the evaluation results obtained by the proposed method and four state-of-the-art exudate detection methods, with the area under the receiver operating characteristic curve (AUC) employed to measure performance. As can be observed, the proposed method achieves the best AUC scores: 0.952, 0.950, and 0.941 on DiaretDB1, e-ophtha, and Messidor, respectively. It is worth noting that the AUC scores were computed at the image level (presence of exudates).
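Since the exudate AUC is computed at the image level, it can be obtained from per-image scores and presence labels alone. A minimal sketch via the Mann-Whitney statistic (which equals the area under the ROC curve); the score and label arrays in the usage below are hypothetical, not values from the paper:

```python
import numpy as np

def image_level_auc(scores, labels):
    """AUC for presence-of-lesion classification from per-image scores.

    scores: saliency-derived score per image (higher = lesion more likely present)
    labels: 1 if the image contains the lesion, else 0
    """
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    # fraction of (positive, negative) pairs ranked correctly; ties count 1/2
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

For instance, `image_level_auc([0.9, 0.2, 0.5, 0.1], [1, 1, 0, 0])` ranks three of four positive-negative pairs correctly, giving an AUC of 0.75.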
In this work, the performance of the proposed method on leakage detection was evaluated on two FA image datasets: DME-DUKE with DR pathology, and LIMA with MR pathology. Table 4 shows the performance of the different methods in detecting leakage sites in terms of sensitivity, specificity, and AUC at the pixel level. It can be observed that our proposed method outperforms the compared methods.
4 Conclusions
Development of the proposed framework was motivated by the medical demand for a tool to measure various types of lesions in retinal images. The accurate detection of retinal lesions is a challenging problem due to image intensity inhomogeneity and the irregular shapes of lesions, with substantial variability in appearance. To address this problem, a novel saliency detection method is proposed, based on a uniqueness feature derived from the intensity and spatial distributions of image components. To the best of our knowledge, this is the first work on the automated detection of hemorrhages, microaneurysms, exudates, and leakage from both retinal color fundus images and fluorescein angiograms. The experimental results, based on seven publicly accessible DR and MR datasets, show that our method outperforms the most recently proposed methods. The proposed method is not only capable of identifying the presence of lesions in an image, but can also measure the size of such lesions.
References
Cheng, M., Zhang, G., Mitra, N., Huang, X., Hu, S.: Global contrast based salient region detection. In: Proceedings IEEE International CVPR, pp. 409–416 (2011)
Fu, H., Cao, X., Tu, Z.: Cluster-based co-saliency detection. IEEE Trans. Image Process. 22(10), 3766–3778 (2013)
Zhao, Y., et al.: Intensity and compactness enabled saliency estimation for leakage detection in diabetic and malarial retinopathy. IEEE Trans. Med. Imag. 36(1), 51–63 (2017)
Niemeijer, M., van Ginneken, B., Abramoff, M.D., et al.: Retinopathy online challenge: automatic detection of microaneurysms in digital color fundus photographs. IEEE Trans. Med. Imag. 29(1), 185–195 (2010)
Seoud, L., Hurtut, T., Chelbi, J., Cheriet, F., Pierre Langlois, J.M.: Red lesion detection using dynamic shape features for diabetic retinopathy screening. IEEE Trans. Med. Imag. 35(4), 1116–1126 (2016)
Giancardo, L., et al.: Exudate-based diabetic macular edema detection in fundus images using publicly available datasets. Med. Image Anal. 16(1), 216–226 (2012)
Pereira, C., Gonçalves, L., Ferreira, M.: Exudate segmentation in fundus images using an ant colony optimization approach. Inf. Sci. 296, 14–24 (2015)
Perazzi, F., Pritch, Y., Hornung, A.: Saliency filters: contrast based filtering for salient region detection. In: CVPR (2012)
Achanta, R., Shaji, A., Smith, K., Lucchi, A., Fua, P.: SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans. Pattern Anal. Mach. Intell. 34, 2274–2282 (2012)
Dashtbozorg, B., Zhang, J., ter Haar Romeny, B.M.: Retinal microaneurysms detection using local convergence index features. CoRR, abs/1707.06865 (2017)
Zhang, X., et al.: Exudate detection in color retinal images for mass screening of diabetic retinopathy. Med. Image Anal. 18(7), 1026–1043 (2014)
Wang, S., Tang, H., et al.: Localizing microaneurysms in fundus images through singular spectrum analysis. IEEE Trans. Biomed. Eng. 64, 990–1002 (2017)
Dai, B., Wu, X., Bu, W.: Retinal microaneurysms detection using gradient vector analysis and class imbalance classification. PLOS ONE 11, 1–23 (2016)
Zhao, Y., Rada, L., Chen, K., Harding, S., Zheng, Y.: Automated vessel segmentation using infinite perimeter active contour model with hybrid region information with application. IEEE Trans. Med. Imag. 34(9), 1797–1807 (2015)
Cheng, J., et al.: Superpixel classification based optic disc and optic cup segmentation for glaucoma screening. IEEE Trans. Med. Imag. 32, 1019–1032 (2013)
Kalesnykiene, V., et al.: DiaretDB1 diabetic retinopathy database and evaluation protocol. In: Proceedings Conference on MIUA (2007)
Decencière, E., et al.: TeleOphta: machine learning and image processing methods for teleophthalmology. IRBM 34(2), 196–203 (2013)
Kandemir, M., Hamprecht, F.A.: Computer-aided diagnosis from weak supervision: a benchmarking study. Comput. Med. Imag. Graph. 42, 44–50 (2015)
Rabbani, H., Allingham, M., Mettu, P., Cousins, S., Farsiu, S.: Fully automatic segmentation of fluorescein leakage in subjects with diabetic macular edema. Invest. Ophthalmol. Vis. Sci. 56(3), 1482–1492 (2015)
Antal, B., et al.: An ensemble-based system for microaneurysm detection and diabetic retinopathy grading. IEEE Trans. Biomed. Eng. 59(6), 1720–1726 (2012)
Quellec, G., Charrière, K., Boudi, Y., Cochener, B., Lamard, M.: Deep image mining for diabetic retinopathy screening. Med. Image Anal. 39, 178–193 (2017)
Gondal, W., et al.: Weakly-supervised localization of diabetic retinopathy lesions in retinal fundus images. CoRR, abs/1706.09634 (2017)
Zhou, L., et al.: Automatic hemorrhage detection in color fundus images based on gradual removal of vascular branches. In: Proceedings on ICIP, pp. 399–403 (2016)
Roychowdhury, S., Parhi, K.K.: DREAM: diabetic retinopathy analysis using machine learning. IEEE J. Biomed. Health Inform. 18(5), 1717–1728 (2014)
Acknowledgment
This work was supported by the National Natural Science Foundation of China (61601029, 61572076), the Grant of Ningbo 3315 Innovation Team, and the China Association for Science and Technology (2016QNRC001).
© 2018 Springer Nature Switzerland AG
Zhao, Y. et al. (2018). Uniqueness-Driven Saliency Analysis for Automated Lesion Detection with Applications to Retinal Diseases. In: Frangi, A., Schnabel, J., Davatzikos, C., Alberola-López, C., Fichtinger, G. (eds) Medical Image Computing and Computer Assisted Intervention – MICCAI 2018. MICCAI 2018. Lecture Notes in Computer Science(), vol 11071. Springer, Cham. https://doi.org/10.1007/978-3-030-00934-2_13
Print ISBN: 978-3-030-00933-5
Online ISBN: 978-3-030-00934-2