
Learning visual representations with optimum-path forest and its applications to Barrett’s esophagus and adenocarcinoma diagnosis

  • Luis A. de Souza Jr.
  • Luis C. S. Afonso
  • Alanna Ebigbo
  • Andreas Probst
  • Helmut Messmann
  • Robert Mendel
  • Christian Hook
  • Christoph Palm
  • João P. Papa
Intelligent Biomedical Data Analysis and Processing

Abstract

Considering the increase in the number of Barrett’s esophagus (BE) cases in the last decade, and its expected continued growth, methods that can provide an early diagnosis of dysplasia in BE-diagnosed patients may offer a high probability of cancer remission. The limitations of traditional methods for BE detection and management motivate the development of computer-aided tools to assist with this problem. In this work, we introduce the unsupervised Optimum-Path Forest (OPF) classifier for learning visual dictionaries in the context of Barrett’s esophagus (BE) and automatic adenocarcinoma diagnosis. The proposed approach was validated on two datasets (MICCAI 2015 and Augsburg) using three feature extractors (SIFT, SURF, and A-KAZE, the latter not yet applied in the BE context), as well as five supervised classifiers: two variants of the OPF, Support Vector Machines with Radial Basis Function and Linear kernels, and a Bayesian classifier. On the MICCAI 2015 dataset, the best results were obtained by combining unsupervised OPF for dictionary generation with supervised OPF for classification, using the SURF feature extractor, reaching an accuracy of nearly \(78\%\) for distinguishing BE patients from adenocarcinoma patients. On the Augsburg dataset, the most accurate results were also obtained with both OPF classifiers, but with A-KAZE as the feature extractor, with an accuracy close to \(73\%\). The combination of feature extraction and bag-of-visual-words techniques outperformed results recently reported in the literature, and we highlight new advances in the related research area. To the best of our knowledge, this is the first work to address computer-aided BE identification using bag-of-visual-words and OPF classifiers; the application of an unsupervised technique to BE feature computation is its major contribution. We also propose a new BE and adenocarcinoma description based on A-KAZE features, not yet applied in the literature.
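The pipeline the abstract describes (local descriptors → unsupervised clustering to build a visual dictionary → per-image word histograms → supervised classification) can be sketched as follows. This is a minimal illustration with synthetic descriptors, not the paper's implementation: plain k-means stands in for unsupervised OPF clustering (the paper uses LibOPF), and the descriptor dimensions, image counts, and labels are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for local descriptors (e.g. SURF/A-KAZE keypoint vectors)
# extracted from 6 endoscopic images; 0 = BE, 1 = adenocarcinoma (illustrative).
train_descs = [rng.normal(loc=c, size=(40, 8)) for c in (0.0, 0.0, 0.0, 2.0, 2.0, 2.0)]
labels = np.array([0, 0, 0, 1, 1, 1])

# 1. Build a visual dictionary by clustering all descriptors.
#    The paper clusters with unsupervised OPF; k-means is a common stand-in.
def kmeans(data, k, iters=20, seed=0):
    r = np.random.default_rng(seed)
    centers = data[r.choice(len(data), k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(((data[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = data[assign == j].mean(axis=0)
    return centers

k = 4  # dictionary size (number of visual words)
codebook = kmeans(np.vstack(train_descs), k)

# 2. Encode each image as a normalized histogram over the visual words.
def bovw_histogram(descs, codebook):
    words = np.argmin(((descs[:, None] - codebook[None]) ** 2).sum(-1), axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

X = np.array([bovw_histogram(d, codebook) for d in train_descs])

# X is now a fixed-length feature matrix, one row per image, suitable for any
# supervised classifier (the paper feeds such histograms to supervised OPF and SVMs).
print(X.shape)
```

The key design point is that clustering turns variable-length sets of keypoint descriptors into fixed-length histograms, which is what makes standard supervised classifiers applicable to endoscopic images with differing numbers of keypoints.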

Keywords

Barrett’s esophagus · Optimum-path forest · Machine learning · Adenocarcinoma · Image processing


Acknowledgements

The authors are grateful to DFG Grant PA 1595/3-1, Capes/Alexander von Humboldt Foundation Grant No. BEX 0581-16-0, CNPq Grants 306166/2014-3 and 307066/2017-7, as well as FAPESP Grants 2013/07375-0, 2014/12236-1, and 2016/19403-6. This material is based upon work supported in part by funds provided by Intel® AI Academy program under Fundunesp Grant No. 2597.2017.


Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2019

Authors and Affiliations

  1. Department of Computing, Federal University of São Carlos - UFSCar, São Carlos, Brazil
  2. Medizinische Klinik III, Klinikum Augsburg, Augsburg, Germany
  3. Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg - OTH Regensburg, Regensburg, Germany
  4. Department of Computing, São Paulo State University - UNESP, Bauru, Brazil
