Surgical Endoscopy, Volume 33, Issue 7, pp 2093–2103

The Heidelberg VR Score: development and validation of a composite score for laparoscopic virtual reality training

  • Mona W. Schmidt
  • Karl-Friedrich Kowalewski
  • Marc L. Schmidt
  • Erica Wennberg
  • Carly R. Garrow
  • Sang Paik
  • Laura Benner
  • Marlies P. Schijven
  • Beat P. Müller-Stich
  • Felix Nickel



Background

Virtual reality (VR) trainers are well integrated into laparoscopic surgical training. However, objective feedback is often provided as single parameters, e.g., time or number of movements, making comparison and evaluation of trainees' overall performance difficult. A new standard for reporting outcome data is therefore needed. The aim of this study was to create a weighted, expert-based composite score that offers simple and direct evaluation of laparoscopic performance on common VR trainers.

Materials and methods

An integrated analytic hierarchy process-Delphi survey was conducted with 14 international experts to achieve a consensus on the importance of different skill categories and parameters in evaluation of laparoscopic performance. A scoring algorithm was established to allow comparability between tasks and VR-trainers. A weighted composite score was calculated for basic skills tasks and peg transfer on the LapMentor™ II and III and validated for both VR-trainers.
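The analytic hierarchy process step described above derives category weights from pairwise importance judgments. A minimal sketch of row geometric mean prioritization, the standard AHP group decision technique, is shown below; the 3×3 comparison matrix and its values are hypothetical (the study weighted five categories from real expert judgments):

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three skill categories
# (safety, efficiency, time); M[i][j] states how much more important
# category i is judged than category j. Values are illustrative only.
M = np.array([
    [1.0, 4.0, 6.0],   # safety vs. (safety, efficiency, time)
    [1/4, 1.0, 2.0],   # efficiency
    [1/6, 1/2, 1.0],   # time
])

# Row geometric mean prioritization: each category's weight is the
# geometric mean of its matrix row, normalized to sum to 1.
gm = np.prod(M, axis=1) ** (1.0 / M.shape[1])
weights = gm / gm.sum()
print(weights)  # safety receives the largest weight
```

With this illustrative matrix, safety dominates the weighting, mirroring (but not reproducing) the consensus pattern reported in the results.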


Results

Five major skill categories (time, efficiency, safety, dexterity, and outcome) were identified and weighted in two Delphi rounds. Safety, with a weight of 67%, was determined to be the most important category, followed by efficiency with 17%. The LapMentor™-specific score was validated using 15 (14) novices and 9 experts; the score was able to differentiate between both groups for basic skills tasks and peg transfer (LapMentor™ II: Exp: 86.5 ± 12.7, Nov: 52.8 ± 18.3; p < 0.001; LapMentor™ III: Exp: 80.8 ± 7.1, Nov: 50.6 ± 16.9; p < 0.001).


Conclusions

An effective and simple performance measure was established to propose a new standard for analyzing and reporting VR outcome data: the Heidelberg virtual reality (VR) score. The scoring algorithm and the consensus results on the importance of different skill aspects in laparoscopic surgery are universally applicable and can be transferred to any simulator or task. By incorporating expert baseline data specific to the respective task, comparability between tasks, studies, and simulators can be achieved.
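As a rough illustration of how such a scoring algorithm can normalize raw simulator parameters against expert baseline data and combine them into one score, here is a minimal sketch. Only the safety (67%) and efficiency (17%) weights come from the reported consensus; the remaining weights, the normalization formula, and all baseline/trainee values are hypothetical assumptions:

```python
# Sketch of a weighted composite score normalized to expert baselines.
# Safety (0.67) and efficiency (0.17) weights follow the reported
# consensus; time/dexterity/outcome weights are hypothetical fillers.
WEIGHTS = {"safety": 0.67, "efficiency": 0.17,
           "time": 0.08, "dexterity": 0.05, "outcome": 0.03}

def normalize(value, expert_baseline, lower_is_better=True):
    """Map a raw parameter onto 0-100; matching or beating the
    expert baseline scores 100 (assumed normalization scheme)."""
    ratio = expert_baseline / value if lower_is_better else value / expert_baseline
    return max(0.0, min(100.0, 100.0 * ratio))

def composite_score(trainee, baselines):
    """Weighted sum of normalized category scores, 0-100."""
    return sum(w * normalize(trainee[cat], baselines[cat])
               for cat, w in WEIGHTS.items())

# Hypothetical raw values (e.g., error counts, path length, seconds).
baselines = {"safety": 2.0, "efficiency": 10.0, "time": 120.0,
             "dexterity": 5.0, "outcome": 1.0}
trainee = {"safety": 4.0, "efficiency": 15.0, "time": 200.0,
           "dexterity": 8.0, "outcome": 1.5}
score = composite_score(trainee, baselines)
print(round(score, 1))
```

Because the weights sum to 1 and each normalized parameter is capped at 100, a performance at expert baseline on every category yields exactly 100, which is what makes scores comparable across tasks and simulators.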


Keywords: Minimally invasive surgery · Virtual reality trainer · Score · Skill assessment · Analytic hierarchy process · Delphi



Acknowledgements

The authors would like to thank all members of the expert panel for their support: Esther Bonrath, Germany; Sanne Botden, Netherlands; Julian Bucher, Germany; Dieter Hahnloser, Switzerland; Daniel A. Hashimoto, USA; Tobias Huber, Germany; Georg Linke, Switzerland; Sören Torge Mees, Germany; Daniel Miscovic, UK; Christoph Reißfelder, Germany; Marlies Schijven, Netherlands; Lee Swanström, France; Siska van Bruwane, Belgium; Markus Wallwiener, Germany. They would also like to thank Hubertus Feußner, Laurents Stassen, and Thomas Vogel for sharing their experience with this project; Mr. Nicolas Billen for his help with implementing the scoring algorithm; Mr. Samuel Kilian for his help during the calculation process; and Ms. Linhong Li for her help with setting up the website.

Compliance with ethical standards


Mona W. Schmidt, Karl-Friedrich Kowalewski, Marc L. Schmidt, Erica Wennberg, Carly R. Garrow, Sang Paik, Laura Benner, Marlies P. Schijven, Beat P. Müller-Stich, and Felix Nickel have no conflicts of interest or financial ties to disclose.


  1. 1.
    Buckley CE, Nugent E, Ryan D, Neary PC (2012) Virtual reality—a new era in surgical training. In: Eichenberg C (ed) Virtual reality in psychological, medical and pedagogical applications. InTech, Rijeka. Google Scholar
  2. 2.
    Yiannakopoulou E, Nikiteas N, Perrea D, Tsigris C (2015) Virtual reality simulators and training in laparoscopic surgery. Int J Surg 13:60–64CrossRefGoogle Scholar
  3. 3.
    Nickel F, Brzoska JA, Gondan M, Rangnick HM, Chu J, Kenngott HG, Linke GR, Kadmon M, Fischer L, Muller-Stich BP (2015) Virtual reality training versus blended learning of laparoscopic cholecystectomy: a randomized controlled trial with laparoscopic novices. Medicine 94:e764CrossRefGoogle Scholar
  4. 4.
    Alaker M, Wynn GR, Arulampalam T (2016) Virtual reality training in laparoscopic surgery: a systematic review & meta-analysis. Int J Surg 29:85–94CrossRefGoogle Scholar
  5. 5.
    Kowalewski KF, Garrow CR, Proctor T, Preukschas AA, Friedrich M, Muller PC, Kenngott HG, Fischer L, Muller-Stich BP, Nickel F (2018) LapTrain: multi-modality training curriculum for laparoscopic cholecystectomy-results of a randomized controlled trial. Surg Endosc 32:3830–3838CrossRefGoogle Scholar
  6. 6.
    Beyer-Berjot L, Berdah S, Hashimoto DA, Darzi A, Aggarwal R (2016) A virtual reality training curriculum for laparoscopic colorectal surgery. J Surg Educ 73:932–941CrossRefGoogle Scholar
  7. 7.
    Thijssen AS, Schijven MP (2010) Contemporary virtual reality laparoscopy simulators: quicksand or solid grounds for assessing surgical trainees? Am J Surg 199:529–541CrossRefGoogle Scholar
  8. 8.
    Yamaguchi S, Konishi K, Yasunaga T, Yoshida D, Kinjo N, Kobayashi K, Ieiri S, Okazaki K, Nakashima H, Tanoue K, Maehara Y, Hashizume M (2007) Construct validity for eye-hand coordination skill on a virtual reality laparoscopic surgical simulator. Surg Endosc 21:2253–2257CrossRefGoogle Scholar
  9. 9.
    Aggarwal R, Crochet P, Dias A, Misra A, Ziprin P, Darzi A (2009) Development of a virtual reality training curriculum for laparoscopic cholecystectomy. Br J Surg 96:1086–1093CrossRefGoogle Scholar
  10. 10.
    Wilson M, McGrath J, Vine S, Brewer J, Defriend D, Masters R (2010) Psychomotor control in a virtual laparoscopic surgery training environment: gaze control parameters differentiate novices from experts. Surg Endosc 24:2458–2464CrossRefGoogle Scholar
  11. 11.
    Schijven M, Jakimowicz J (2003) Construct validity: experts and novices performing on the Xitact LS500 laparoscopy simulator. Surg Technol Int 11:32–36Google Scholar
  12. 12.
    Larsen CR, Grantcharov T, Aggarwal R, Tully A, Sorensen JL, Dalsgaard T, Ottesen B (2006) Objective assessment of gynecologic laparoscopic skills using the LapSimGyn virtual reality simulator. 20:1460–1466Google Scholar
  13. 13.
    Van Sickle KR, Ritter EM, McClusky DA III, Lederman A, Baghai M, Gallagher AG, Smith CD (2007) Attempted establishment of proficiency levels for laparoscopic performance on a national scale using simulation: the results from the 2004 SAGES minimally invasive surgical trainer-virtual reality (MIST-VR) learning center study. Surg Endosc 21:5–10CrossRefGoogle Scholar
  14. 14.
    Avgerinos DV, Goodell KH, Waxberg S, Cao CG, Schwaitzberg SD (2005) Comparison of the sensitivity of physical and virtual laparoscopic surgical training simulators to the user’s level of experience. Surg Endosc 19:1211–1215CrossRefGoogle Scholar
  15. 15.
    van Dongen KW, Tournoij E, van der Zee DC, Schijven MP, Broeders IA (2007) Construct validity of the LapSim: can the LapSim virtual reality simulator distinguish between novices and experts? Surg Endosc 21:1413–1417CrossRefGoogle Scholar
  16. 16.
    Andreatta PB, Woodrum DT, Gauger PG, Minter RM (2008) LapMentor metrics possess limited construct validity. Simul Healthc 3:16–25CrossRefGoogle Scholar
  17. 17.
    Zhang A, Hunerbein M, Dai Y, Schlag PM, Beller S (2008) Construct validity testing of a laparoscopic surgery simulator (Lap Mentor): evaluation of surgical skill with a virtual laparoscopic training simulator. Surg Endosc 22:1440–1444CrossRefGoogle Scholar
  18. 18.
    McDougall EM, Corica FA, Boker JR, Sala LG, Stoliar G, Borin JF, Chu FT, Clayman RV (2006) Construct validity testing of a laparoscopic surgical simulator. J Am Coll Surg 202:779–787CrossRefGoogle Scholar
  19. 19.
    Rosenthal R, von Websky MW, Hoffmann H, Vitz M, Hahnloser D, Bucher HC, Schäfer J (2015) How to report multiple outcome metrics in virtual reality simulation. Eur Surg 47:202–205CrossRefGoogle Scholar
  20. 20.
    Okrainec A, Soper NJ, Swanstrom LL, Fried GM (2011) Trends and results of the first 5 years of fundamentals of laparoscopic surgery (FLS) certification testing. Surg Endosc 25:1192–1198CrossRefGoogle Scholar
  21. 21.
    Peters JH, Fried GM, Swanstrom LL, Soper NJ, Sillin LF, Schirmer B, Hoffman K, the SFLSC (2004) Development and validation of a comprehensive program of education and assessment of the basic fundamentals of laparoscopic surgery. Surgery 135:21–27CrossRefGoogle Scholar
  22. 22.
    Saaty TL (1980) The analytic heirarchy process: planning, priority setting, resource allocation. McGraw-Hill, New YorkGoogle Scholar
  23. 23.
    Dong Y, Zhang G, Hong W, Xu Y (2010) Consensus models for AHP group decision making under row geometric mean prioritization method. Decis Support Syst 49:281–289CrossRefGoogle Scholar
  24. 24.
    Cotin S, Stylopoulos N, Ottensmeyer M, Neumann P, Rattner D, Dawson S (2002) Metrics for laparoscopic skills trainers: the weakest link! In: Dohi T, Kikinis R (eds) Medical image computing and computer-assisted intervenetion—MICCAI. Springer, Berlin, Heidelberg, pp 35–43Google Scholar
  25. 25.
    Moorthy K, Munz Y, Sarker SK, Darzi A (2003) Objective assessment of technical skills in surgery. BMJ 327:1032–1037CrossRefGoogle Scholar
  26. 26.
    McDougall EM (2007) Validation of surgical simulators. J Endourol 21:244–247CrossRefGoogle Scholar
  27. 27.
    Crawford G, Williams C (1985) A note on the analysis of subjective judgment matrices. J Math Psychol 29:387–405CrossRefGoogle Scholar
  28. 28.
    Aguaron J, Moreno-Jiménez JMa (2003) The geometric consistency index: approximated thresholds. Eur J Oper Res 147:137–145CrossRefGoogle Scholar
  29. 29.
    Dalkey NC (1969) The Delphi method: an experimental study of group opinion RAND CORP SANTA MONICA CALIFGoogle Scholar
  30. 30.
    Landeta J (2006) Current validity of the Delphi method in social sciences. Technol Forecast Soc Chang 73:467–482CrossRefGoogle Scholar
  31. 31.
    Awad M, Awad F, Carter F, Jervis B, Buzink S, Foster J, Jakimowicz J, Francis NK (2018) Consensus views on the optimum training curriculum for advanced minimally invasive surgery: a delphi study. Int J Surg 53:137–142CrossRefGoogle Scholar
  32. 32.
    Cuschieri A, Francis N, Crosby J, Hanna GB (2001) What do master surgeons think of surgical competence and revalidation? 1. Am J Surg 182:110–116CrossRefGoogle Scholar
  33. 33.
    Blumenthal AL (1977) The process of cognition. Experimental psychology series. Prentice Hall/Pearson Education, New JerseyGoogle Scholar
  34. 34.
    Palter VN, Graafland M, Schijven MP, Grantcharov TP (2012) Designing a proficiency-based, content validated virtual reality curriculum for laparoscopic colorectal surgery: a Delphi approach. Surgery 151:391–397CrossRefGoogle Scholar
  35. 35.
    Skulmoski GJ, Hartman FT, Krahn J (2007) The Delphi method for graduate research. J Inf Technol Educ 6:1–21Google Scholar
  36. 36.
    Keeney S, McKenna H, Hasson F (2010) The Delphi technique in nursing and health research. Wiley, ChichesterGoogle Scholar
  37. 37.
    Trevelyan EG, Robinson PN (2015) Delphi methodology in health research: how to do it? Eur J Integr Med 7:423–428CrossRefGoogle Scholar
  38. 38.
    Chowriappa AJ, Shi Y, Raza SJ, Ahmed K, Stegemann A, Wilding G, Kaouk J, Peabody JO, Menon M, Hassett JM, Kesavadas T, Guru KA (2013) Development and validation of a composite scoring system for robot-assisted surgical training—the Robotic skills assessment score. J Surg Res 185:561–569CrossRefGoogle Scholar
  39. 39.
    Stylopoulos N, Cotin S, Maithel SK, Ottensmeyer M, Jackson PG, Bardsley RS, Neumann PF, Rattner DW, Dawson SL (2004) Computer-enhanced laparoscopic training system (CELTS): bridging the gap. Surg Endosc 18:782–789CrossRefGoogle Scholar
  40. 40.
    Agrusa A, Di Buono G, Buscemi S, Cucinella G, Romano G, Gulotta G (2018) 3D laparoscopic surgery: a prospetive clinical trial. Oncotarget 9:17325Google Scholar
  41. 41.
    Milkovich G, Annoni AJ, Mahoney T (1972) The use of the Delphi procedures in manpower forecasting. Manag Sci 19:381–388CrossRefGoogle Scholar
  42. 42.
    Khorramshahgol R, Moustakis V (1988) Delphic hierarchy process (DHP): a methodology for priority setting derived from the Delphi method and analytical hierarchy process. Eur J Oper Res 37:347–354CrossRefGoogle Scholar
  43. 43.
    Hsu C-C, Sandford BA (2007) The Delphi technique: making sense of consensus. Pract Assess Res Eval 12:1–8Google Scholar

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  • Mona W. Schmidt (1)
  • Karl-Friedrich Kowalewski (1)
  • Marc L. Schmidt (2)
  • Erica Wennberg (1)
  • Carly R. Garrow (1)
  • Sang Paik (1)
  • Laura Benner (3)
  • Marlies P. Schijven (4)
  • Beat P. Müller-Stich (1)
  • Felix Nickel (1)

  1. Department of General, Visceral, and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
  2. Karlsruhe, Germany
  3. Department of Medical Biometry and Informatics, University of Heidelberg, Heidelberg, Germany
  4. Department of Surgery, Amsterdam Gastroenterology and Metabolism, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
