
Assessment of Robotic Console Skills (ARCS): construct validity of a novel global rating scale for technical skills in robotically assisted surgery


Abstract

Background

Skill assessment during robotically assisted surgery remains challenging. While the popularity of the Global Evaluative Assessment of Robotics Skills (GEARS) has grown, its lack of discrimination between independent console skills limits its usefulness. The purpose of this study was to evaluate construct validity and interrater reliability of a novel assessment designed to overcome this limitation.

Methods

We created the Assessment of Robotic Console Skills (ARCS), a global rating scale with six console skill domains. Fifteen volunteers who had served as console surgeon for 0 (“novice”), 1–100 (“intermediate”), or >100 (“experienced”) robotically assisted procedures each performed three standardized tasks. Three blinded raters scored the task videos using ARCS, with a 5-point Likert scale for each skill domain. Scores were analyzed for evidence of construct validity and interrater reliability.
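As a rough illustration of how per-domain scores could be compared across experience groups, the sketch below runs a Kruskal-Wallis omnibus test followed by a pairwise Mann-Whitney U test. The scores, group sizes, and choice of tests are assumptions for illustration only; the abstract does not specify the authors' actual statistical procedure.

```python
# Hypothetical sketch of group comparisons for one ARCS domain.
# Data and test choices are illustrative assumptions, not the
# authors' published analysis.
import numpy as np
from scipy import stats

# Per-subject mean scores (averaged over 3 raters and 3 tasks),
# 5 subjects per experience group as in the 15-subject design.
novice       = np.array([1.2, 1.4, 1.6, 1.3, 1.5])
intermediate = np.array([2.6, 2.9, 2.7, 3.0, 2.8])
experienced  = np.array([3.7, 3.9, 3.8, 4.0, 3.6])

# Omnibus test: do the three groups differ at all?
h, p = stats.kruskal(novice, intermediate, experienced)
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

# Pairwise follow-up, e.g. experienced vs. novice.
u, p = stats.mannwhitneyu(experienced, novice, alternative="two-sided")
print(f"Experienced vs. novice: U = {u:.1f}, p = {p:.4f}")
```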

Results

Group demographics were indistinguishable except for the number of robotically assisted procedures performed (p = 0.001). The mean scores of experienced subjects exceeded those of novices in dexterity (3.8 > 1.4, p < 0.001), field of view (4.1 > 1.8, p < 0.001), instrument visualization (3.9 > 2.2, p < 0.001), manipulator workspace (3.6 > 1.9, p = 0.001), and force sensitivity (4.3 > 2.6, p < 0.001). The mean scores of intermediate subjects exceeded those of novices in dexterity (2.8 > 1.4, p = 0.002), field of view (2.8 > 1.8, p = 0.021), instrument visualization (3.2 > 2.2, p = 0.045), manipulator workspace (3.1 > 1.9, p = 0.004), and force sensitivity (3.7 > 2.6, p = 0.033). The mean scores of experienced subjects exceeded those of intermediates in dexterity (3.8 > 2.8, p = 0.003), field of view (4.1 > 2.8, p < 0.001), and instrument visualization (3.9 > 3.2, p = 0.044). Rater agreement in each domain demonstrated statistically significant concordance (p < 0.05).
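The rater-agreement result can be reproduced in spirit with Kendall's coefficient of concordance (W), one common statistic for agreement among several raters scoring the same subjects. The score matrix below is invented, and the abstract does not state which concordance statistic the authors used, so this is a minimal sketch under those assumptions.

```python
# Kendall's W over a raters-by-subjects score matrix (invented data;
# ties handled by midranks, tie correction omitted for brevity).
import numpy as np
from scipy import stats

def kendalls_w(scores):
    """scores: shape (m raters, n subjects)."""
    m, n = scores.shape
    ranks = stats.rankdata(scores, axis=1)         # rank subjects per rater
    rank_sums = ranks.sum(axis=0)                  # rank total per subject
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    w = 12 * s / (m ** 2 * (n ** 3 - n))           # 0 = no agreement, 1 = perfect
    chi2 = m * (n - 1) * w                         # chi-square approximation
    return w, stats.chi2.sf(chi2, df=n - 1)

scores = np.array([[1, 2, 3, 4, 5],                # rater 1
                   [2, 1, 3, 4, 5],                # rater 2
                   [1, 3, 2, 4, 5]])               # rater 3
w, p = kendalls_w(scores)
print(f"Kendall's W = {w:.2f}, p = {p:.4f}")
```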

Conclusions

We present strong evidence for construct validity and interrater reliability of ARCS. Our study shows that learning curves for some console skills plateau faster than others. Therefore, ARCS may be more useful than GEARS for evaluating distinct console skills. Future studies will examine why some domains did not adequately differentiate between subject groups, as well as potential applications for intraoperative use.




Acknowledgements

We would like to thank Paula Ezell DVM, Libette Roman-Laureano DVM, and Norris Cabel for their assistance in obtaining the tissue samples. We would also like to thank Michael Banks, Mark Bello, Lillian Kansaku, and Pauline Russell for their assistance with videos and photographs.

Author information

Correspondence to May Liu.

Ethics declarations

Disclosures

At the time of this study, all authors were employed by Intuitive Surgical, Inc., which funded this project.


About this article


Cite this article

Liu, M., Purohit, S., Mazanetz, J. et al. Assessment of Robotic Console Skills (ARCS): construct validity of a novel global rating scale for technical skills in robotically assisted surgery. Surg Endosc 32, 526–535 (2018). https://doi.org/10.1007/s00464-017-5694-7


Keywords

  • Robotic surgery
  • Technical skills
  • Skill assessment
  • Surgical education
  • Global rating scale