The Heidelberg VR Score: development and validation of a composite score for laparoscopic virtual reality training
Virtual reality (VR) trainers are well integrated into laparoscopic surgical training. However, objective feedback is often provided as single parameters, e.g., time or number of movements, making comparison and evaluation of trainees’ overall performance difficult. A new standard for reporting outcome data is therefore needed. The aim of this study was to create a weighted, expert-based composite score that offers a simple and direct evaluation of laparoscopic performance on common VR trainers.
Materials and methods
An integrated analytic hierarchy process (AHP)-Delphi survey was conducted with 14 international experts to reach a consensus on the importance of different skill categories and parameters in the evaluation of laparoscopic performance. A scoring algorithm was established to allow comparability between tasks and VR trainers. A weighted composite score was calculated for the basic skills tasks and peg transfer on the LapMentor™ II and III and validated for both VR trainers.
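The AHP step turns experts' pairwise comparisons of skill categories into numerical weights. A minimal sketch of one common weighting method (the row geometric mean of the comparison matrix), using hypothetical comparison values on Saaty's 1–9 scale rather than the panel's actual judgments:

```python
import math

def ahp_weights(matrix):
    """Approximate AHP priority weights from a pairwise comparison
    matrix using the row geometric mean method: take the geometric
    mean of each row, then normalize so the weights sum to 1."""
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical pairwise comparisons for three categories
# (safety, efficiency, time); entry [i][j] states how much more
# important category i is than category j on Saaty's 1-9 scale.
pairwise = [
    [1,     5,     7],    # safety
    [1 / 5, 1,     3],    # efficiency
    [1 / 7, 1 / 3, 1],    # time
]

weights = ahp_weights(pairwise)  # safety receives the largest weight
```

In a full AHP one would also check the consistency ratio of the comparison matrix before accepting the weights; the eigenvector method is an alternative to the geometric mean used here.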
Results
Five major skill categories (time, efficiency, safety, dexterity, and outcome) were identified and weighted in two Delphi rounds. Safety, with a weight of 67%, was rated the most important category, followed by efficiency at 17%. The LapMentor™-specific score was validated using 15 (14) novices and 9 experts; the score was able to differentiate between the two groups for the basic skills tasks and peg transfer (LapMentor™ II: Exp: 86.5 ± 12.7, Nov: 52.8 ± 18.3; p < 0.001; LapMentor™ III: Exp: 80.8 ± 7.1, Nov: 50.6 ± 16.9; p < 0.001).
Conclusion
An effective and simple performance measure was established to propose a new standard for analyzing and reporting VR outcome data: the Heidelberg virtual reality (VR) score. The scoring algorithm and the consensus results on the importance of different skill aspects in laparoscopic surgery are universally applicable and can be transferred to any simulator or task. By incorporating specific expert baseline data for the respective task, comparability between tasks, studies, and simulators can be achieved.
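The composite score combines category weights with parameter values normalized against expert baseline data. The sketch below shows one plausible normalization (expert-level performance maps to 100, worse performance decays proportionally, clamped to 0–100) and the weighted aggregation; the normalization function, the parameter examples, and the residual 16% weight split are illustrative assumptions, not the published Heidelberg algorithm:

```python
def parameter_score(value, expert_mean, lower_is_better=True):
    """Map a raw parameter onto 0-100 relative to an expert baseline.
    Performing at the expert mean yields 100; this proportional decay
    is an assumed normalization for illustration only."""
    if lower_is_better:
        ratio = expert_mean / value if value > 0 else 1.0
    else:
        ratio = value / expert_mean if expert_mean > 0 else 1.0
    return max(0.0, min(100.0, 100.0 * ratio))

def composite_score(scores_by_category, category_weights):
    """Weighted sum of per-category mean parameter scores."""
    total = 0.0
    for category, weight in category_weights.items():
        values = scores_by_category[category]
        total += weight * (sum(values) / len(values))
    return total

# Consensus weights from the study for safety and efficiency; the
# remaining 16% is assumed here to fall to time for simplicity.
weights = {"safety": 0.67, "efficiency": 0.17, "time": 0.16}

# Hypothetical trainee values vs. expert baselines.
scores = {
    "safety": [parameter_score(2, 1)],          # e.g., tissue injuries
    "efficiency": [parameter_score(480, 300)],  # e.g., path length (cm)
    "time": [parameter_score(200, 150)],        # e.g., task time (s)
}

score = composite_score(scores, weights)  # a value on the 0-100 scale
```

Because each parameter is normalized against an expert baseline for the same task, the resulting 0–100 score is comparable across tasks and simulators, which is the property the abstract highlights.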
Keywords: Minimally invasive surgery · Virtual reality trainer · Score · Skill assessment · Analytic hierarchy process · Delphi
Acknowledgements
The authors would like to thank all members of the expert panel for their support: Esther Bonrath, Germany; Sanne Botden, Netherlands; Julian Bucher, Germany; Dieter Hahnloser, Switzerland; Daniel A. Hashimoto, USA; Tobias Huber, Germany; Georg Linke, Switzerland; Sören Torge Mees, Germany; Daniel Miscovic, UK; Christoph Reißfelder, Germany; Marlies Schijven, Netherlands; Lee Swanström, France; Siska van Bruwane, Belgium; Markus Wallwiener, Germany. The authors would also like to thank Hubertus Feußner, Laurents Stassen, and Thomas Vogel for sharing their experience on this project, Mr. Nicolas Billen for his help with implementing the scoring algorithm, Mr. Samuel Kilian for his help during the calculation process, and Ms. Linhong Li for her help with setting up the website.
Compliance with ethical standards
Mona W. Schmidt, Karl-Friedrich Kowalewski, Marc L. Schmidt, Erica Wennberg, Carly R. Garrow, Sang Paik, Laura Benner, Marlies Schijven, Beat-Peter Müller Stich, and Felix Nickel have no conflict of interest or financial ties to disclose.