A dataset of laryngeal endoscopic images with comparative study on convolution neural network-based semantic segmentation

  • Max-Heinrich Laves
  • Jens Bicker
  • Lüder A. Kahrs
  • Tobias Ortmaier
Original Article

Abstract

Purpose

Automated segmentation of anatomical structures in medical image analysis is a prerequisite for autonomous diagnosis as well as for various computer- and robot-aided interventions. Recent methods based on deep convolutional neural networks (CNN) have outperformed earlier heuristic methods. However, those methods were primarily evaluated in rigid, real-world environments. In this study, existing segmentation methods were evaluated for their use on a new dataset of images from transoral endoscopic exploration.

Methods

Four machine learning-based methods, SegNet, UNet, ENet, and ErfNet, were trained with supervision on a novel 7-class dataset of the human larynx. The dataset contains 536 manually segmented images from two patients, acquired during laser incisions. The Intersection-over-Union (IoU) metric was used to measure the segmentation accuracy of each method. Data augmentation and network ensembling were employed to increase segmentation accuracy. Stochastic inference was used to expose the uncertainties of the individual models. Patient-to-patient transfer was investigated using patient-specific fine-tuning.
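
For illustration only, a minimal PyTorch-style sketch of the IoU metric as it is commonly computed for multi-class label maps; the function name, the eps smoothing term, and the convention of skipping classes absent from both maps are assumptions, not the authors' evaluation code:

```python
import torch

def mean_iou(pred, target, num_classes=7, eps=1e-6):
    """Per-class Intersection-over-Union, averaged over the classes present.

    pred, target: integer label maps, e.g. of shape (H, W) or (N, H, W).
    """
    ious = []
    for c in range(num_classes):
        pred_c = (pred == c)
        target_c = (target == c)
        union = (pred_c | target_c).sum()
        if union == 0:
            continue  # class absent from both maps; skip it
        intersection = (pred_c & target_c).sum()
        ious.append(intersection.float() / (union.float() + eps))
    return torch.stack(ious).mean()
```

A call such as mean_iou(net(image).argmax(dim=1), label) would score one prediction against its manual segmentation; how absent classes and smoothing were handled in the study itself is not stated in this abstract.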

Results

In this study, a weighted average ensemble network of UNet and ErfNet was best suited for the segmentation of laryngeal soft tissue with a mean IoU of 84.7%. The highest efficiency was achieved by ENet with a mean inference time of 9.22 ms per image. It is shown that 10 additional images from a new patient are sufficient for patient-specific fine-tuning.
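
As a hedged sketch of the weighted average ensembling named above (the study's actual weighting scheme and trained models are not reproduced here), combining two segmentation networks in PyTorch could look as follows; the model handles and example weights are hypothetical placeholders:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ensemble_segment(models, weights, image):
    """Weighted average of per-pixel class probabilities from several nets.

    models:  segmentation networks mapping (N, 3, H, W) -> logits (N, C, H, W)
    weights: one scalar ensemble weight per model (placeholders here)
    image:   input batch of shape (N, 3, H, W)
    """
    probs = sum(w * F.softmax(model(image), dim=1)
                for model, w in zip(models, weights))
    return probs.argmax(dim=1)  # hard per-pixel labels, shape (N, H, W)
```

For example, ensemble_segment([unet, erfnet], [0.6, 0.4], batch) would combine a UNet and an ErfNet as in the study; the 0.6/0.4 weights are illustrative, and in practice they would be chosen on validation data rather than by hand.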

Conclusion

CNN-based methods for semantic segmentation are applicable to endoscopic images of laryngeal soft tissue. The segmentations can be used to enforce active constraints or to monitor morphological changes and autonomously detect pathologies. Further improvements could be achieved by using a larger dataset or by training the models in a self-supervised manner on additional unlabeled data.

Keywords

Computer vision · Larynx · Vocal folds · Soft tissue · Open-access dataset · Machine learning · Patient-to-patient fine-tuning

Notes

Acknowledgements

We thank Giorgio Peretti from the Ospedale Policlinico San Martino, University of Genova, Italy, for providing us with the in vivo laryngeal data used in this study. We would also like to thank James Napier from the Institute of Lasers and Optics, University of Applied Sciences Emden-Leer, Germany, for his thorough proofreading of this manuscript.

Funding

This research has received funding from the European Union as part of the ERFE OPhonLas project.

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.

Formal consent

The endoscopic video images were acquired by Prof. Giorgio Peretti (Director of Otorhinolaryngology at Ospedale Policlinico San Martino, University of Genova). Patients gave their written consent for the procedure and the use of the data. No further approval is necessary for such endoscopic recordings. The videos were anonymized and made available inside the µRALP consortium for further usage.

Ethical standards

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.

Supplementary material

Supplementary material 1 (mov 85920 KB)


Copyright information

© CARS 2019

Authors and Affiliations

  1. Leibniz Universität Hannover, Hannover, Germany
