
Crowdsourcing for Reference Correspondence Generation in Endoscopic Images

  • Lena Maier-Hein
  • Sven Mersmann
  • Daniel Kondermann
  • Christian Stock
  • Hannes Götz Kenngott
  • Alexandro Sanchez
  • Martin Wagner
  • Anas Preukschas
  • Anna-Laura Wekerle
  • Stefanie Helfert
  • Sebastian Bodenstedt
  • Stefanie Speidel
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8674)

Abstract

Computer-assisted minimally-invasive surgery (MIS) is often based on algorithms that require establishing correspondences between endoscopic images. However, the reference annotations frequently required to train or validate such methods are extremely difficult to obtain because they are typically made by a medical expert with very limited resources, and publicly available data sets are still far too small to capture the wide range of anatomical/scene variance. Crowdsourcing is an emerging approach in which cognitive tasks are outsourced to many anonymous, untrained individuals from an online community. To our knowledge, this paper is the first to investigate the concept of crowdsourcing in the context of endoscopic video image annotation for computer-assisted MIS. According to our study on publicly available in vivo data with manual reference annotations, anonymous non-experts obtain a median annotation error of 2 px (n = 10,000). By applying cluster analysis to multiple annotations per correspondence, this error can be reduced to about 1 px, which is comparable to that obtained by medical experts (n = 500). We conclude that crowdsourcing is a viable method for generating high quality reference correspondences in endoscopic video images.
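The fusion step described above — collecting several crowd annotations per correspondence and reducing them to a single reference point via cluster analysis — can be sketched as follows. The paper uses model-based clustering (the R package mclust); the snippet below is a simplified stand-in that groups annotations by a fixed pixel tolerance and returns the centroid of the largest group. The `radius` parameter is an assumed tolerance for illustration, not a value from the study.

```python
from statistics import mean

def fuse_annotations(points, radius=3.0):
    """Fuse multiple crowd annotations (x, y) of one correspondence.

    Groups annotations that lie within `radius` px of a growing
    cluster's centroid and returns the centroid of the largest
    cluster, so stray outlier clicks are discarded. Simplified
    stand-in for the model-based clustering used in the paper;
    `radius` is an assumed tolerance, not a value from the study.
    """
    clusters = []
    for p in points:
        for c in clusters:
            # Compare against the current centroid of this cluster.
            cx = mean(q[0] for q in c)
            cy = mean(q[1] for q in c)
            if (p[0] - cx) ** 2 + (p[1] - cy) ** 2 <= radius ** 2:
                c.append(p)
                break
        else:
            clusters.append([p])  # start a new cluster
    # The largest cluster is taken as the consensus annotation.
    best = max(clusters, key=len)
    return (mean(q[0] for q in best), mean(q[1] for q in best))

# Three consistent clicks and one outlier; the outlier is ignored.
crowd = [(100.5, 50.2), (101.0, 49.8), (99.8, 50.5), (140.0, 90.0)]
fused = fuse_annotations(crowd)
```

In this sketch the consensus point is the mean of the three nearby clicks, while the distant fourth click forms its own (discarded) cluster.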

Keywords

Medical Expert · Knowledge Worker · Endoscopic Image · Reference Annotation · Annotation Accuracy


Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Lena Maier-Hein (1)
  • Sven Mersmann (1)
  • Daniel Kondermann (2)
  • Christian Stock (3)
  • Hannes Götz Kenngott (4)
  • Alexandro Sanchez (2)
  • Martin Wagner (4)
  • Anas Preukschas (4)
  • Anna-Laura Wekerle (4)
  • Stefanie Helfert (4)
  • Sebastian Bodenstedt (5)
  • Stefanie Speidel (5)
  1. Computer-assisted Interventions, German Cancer Research Center, Germany
  2. Heidelberg Collaboratory for Image Processing, University of Heidelberg, Germany
  3. Institute of Medical Biometry and Informatics, University of Heidelberg, Germany
  4. Department of General, Visceral and Transplant Surgery, University of Heidelberg, Germany
  5. Institute for Anthropomatics, Karlsruhe Institute of Technology (KIT), Germany
