Skin Disease Recognition Using Deep Saliency Features and Multimodal Learning of Dermoscopy and Clinical Images

  • Zongyuan Ge
  • Sergey Demyanov
  • Rajib Chakravorty
  • Adrian Bowling
  • Rahil Garnavi
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10435)

Abstract

Skin cancer is the most common cancer worldwide, and melanoma, the most fatal form, accounts for more than 10,000 deaths annually in Australia and the United States. The 5-year survival rate for melanoma can exceed 90% if it is detected at an early stage. However, the intrinsic visual similarity across various skin conditions makes diagnosis challenging for both clinicians and automated classification methods. Many automated skin cancer diagnostic systems have been proposed in the literature, all of which consider solely dermoscopy images in their analysis. In reality, however, clinicians consider two imaging modalities: an initial screening using clinical photography to capture a macro view of the mole, followed by dermoscopy imaging, which visualizes morphological structures within the skin lesion. Evidence shows that these two modalities provide complementary visual features that can empower the decision-making process. In this work, we propose a novel deep convolutional neural network (DCNN) architecture along with a saliency feature descriptor to capture discriminative features of the two modalities for skin lesion classification. The proposed DCNN accepts a pair of images, a clinical and a dermoscopic view of a single lesion, and is capable of learning single-modality and cross-modality representations simultaneously. Using one of the largest collected skin lesion datasets, we demonstrate that the proposed multi-modality method significantly outperforms single-modality methods on three tasks: differentiation between 15 skin diseases, distinguishing cancerous moles (3 cancer types including melanoma) from non-cancerous moles, and detecting melanoma among benign cases.
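The abstract describes combining a single-modality representation (features from each view on its own) with a cross-modality representation (interactions between the two views). The following is a minimal NumPy sketch of that general idea only, not the authors' architecture: `extract_features` is a hypothetical stand-in for a DCNN branch, and the cross-modality term is illustrated with an outer product in the style of bilinear pooling.

```python
import numpy as np

def extract_features(image, dim=4):
    """Hypothetical stand-in for one DCNN branch: project an image
    to a fixed-size descriptor with a fixed random linear map."""
    rng = np.random.default_rng(0)  # fixed seed so the map is deterministic
    w = rng.standard_normal((image.size, dim))
    return image.reshape(-1) @ w

# Toy stand-ins for the two views of one lesion.
clinical = np.ones((2, 2))           # macro-view clinical photograph
dermoscopic = np.full((2, 2), 0.5)   # dermoscopy image

f_c = extract_features(clinical)
f_d = extract_features(dermoscopic)

# Single-modality representation: each branch's descriptor, concatenated.
single = np.concatenate([f_c, f_d])

# Cross-modality representation: outer product of the two descriptors,
# capturing pairwise interactions between modalities (bilinear-pooling style).
cross = np.outer(f_c, f_d).reshape(-1)

# Joint representation fed to a downstream classifier.
joint = np.concatenate([single, cross])
print(joint.shape)  # (4 + 4 + 16,) = (24,)
```

In a real system the two branches would be learned convolutional networks and the fused vector would feed a classification head; the sketch only shows how the single- and cross-modality terms compose.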


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Zongyuan Ge (1)
  • Sergey Demyanov (1)
  • Rajib Chakravorty (1)
  • Adrian Bowling (2)
  • Rahil Garnavi (1)

  1. IBM Research, Melbourne, Australia
  2. MoleMap NZ Ltd., Auckland, New Zealand
