Meaningful Assessment of Robotic Surgical Style using the Wisdom of Crowds
Quantitative assessment of surgical skill is an important aspect of surgical training; however, existing metrics can be difficult to interpret and may not capture the stylistic characteristics that define expertise. This study proposes a methodology for evaluating surgical skill based on metrics associated with stylistic adjectives, and evaluates the ability of this method to differentiate expertise levels.
We recruited subjects of different expertise levels to perform training tasks on a surgical simulator. A lexicon of contrasting adjective pairs, based on skills important for robotic surgery and inspired by the Global Evaluative Assessment of Robotic Skills (GEARS) tool, was developed. To validate the use of stylistic adjectives for surgical skill assessment, crowd-workers rated videos of the subjects' posture while performing the task, as well as videos of the task itself. Metrics associated with each adjective were identified from kinematic and physiological measurements through correlation with the crowd-sourced adjective ratings. To evaluate the ability of the chosen metrics to distinguish expertise levels, two classifiers were trained and tested using these metrics.
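The metric-selection step described above can be sketched as follows. This is an illustrative toy example, not the paper's actual pipeline: the metric names, data, and the Spearman-correlation threshold are all assumptions made for the sketch.

```python
# Hypothetical sketch: for each candidate kinematic/physiological metric,
# compute its rank correlation with the crowd-sourced rating of one
# stylistic adjective, and keep the metrics that correlate strongly.
import numpy as np

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
n_subjects = 20

# Toy data: one crowd rating per subject for a single adjective, and a
# table of candidate metrics (names are illustrative placeholders).
crowd_ratings = rng.uniform(1, 7, n_subjects)
metrics = {
    "path_length": -0.8 * crowd_ratings + rng.normal(0, 0.5, n_subjects),
    "mean_jerk": rng.normal(0, 1, n_subjects),  # unrelated metric
}

# Keep metrics whose |rho| exceeds a threshold (the paper used
# significance of the correlation; a fixed cutoff is used here).
selected = [(name, spearman(vals, crowd_ratings))
            for name, vals in metrics.items()
            if abs(spearman(vals, crowd_ratings)) > 0.5]
print(selected)
```

Only the metric constructed to track the rating should survive the cutoff; in the real study this filtering was done per adjective, yielding an adjective-specific metric set.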
Crowd-assignment ratings for all adjectives were significantly correlated with expertise level. The results indicate that the naive Bayes classifier performs best, with accuracies of \(89\pm 12\%\), \(94\pm 8\%\), \(95\pm 7\%\), and \(100\pm 0\%\) when classifying into four, three, and two levels of expertise, respectively.
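A Gaussian naive Bayes classifier over the selected metrics can be sketched as below. The features, class labels, and the use of leave-one-out cross-validation are assumptions made for illustration; the paper's actual feature set and validation scheme may differ.

```python
# Minimal Gaussian naive Bayes sketch: classify expertise level from
# a small set of style metrics. All data here are synthetic.
import numpy as np

def fit_gnb(X, y):
    """Per-class feature means, variances, and class priors."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9, len(Xc) / len(y))
    return params

def predict_gnb(params, x):
    """Pick the class maximizing Gaussian log-likelihood + log prior."""
    best, best_lp = None, -np.inf
    for c, (mu, var, prior) in params.items():
        lp = np.log(prior) - 0.5 * np.sum(
            np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        if lp > best_lp:
            best, best_lp = c, lp
    return best

# Toy dataset: 2 metrics, 3 expertise levels with separated means.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(mu, 0.5, (15, 2)) for mu in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 15)

# Leave-one-out cross-validation, one plausible way to report a mean
# accuracy for a small subject pool.
correct = 0
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    correct += predict_gnb(fit_gnb(X[mask], y[mask]), X[i]) == y[i]
print(f"LOOCV accuracy: {correct / len(y):.2f}")
```

Naive Bayes is a natural baseline here: with few subjects and a handful of metrics per adjective, its strong independence assumption limits overfitting relative to more flexible classifiers.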
The proposed method is effective at mapping understandable adjectives of expertise to the stylistic movements and physiological responses of trainees.
Keywords: Surgical skill assessment · Motion analysis · Crowd-sourcing · Robotic surgery
This work was supported by the da Vinci® Standalone Simulator loan program at Intuitive Surgical (PI: Rege) and a clinical research grant from Intuitive Surgical, Inc. (PI: Majewicz Fey).
Compliance with ethical standards
Conflict of interest
The authors declare that they have no conflict of interest.
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards. The study protocol was approved by both UTD and UTSW IRB offices (UTD #14-57, UTSW #STU 032015-053).
Informed consent was obtained from all individual participants included in the study.