
Automatic and near real-time stylistic behavior assessment in robotic surgery

  • M. Ershad
  • R. Rege
  • Ann Majewicz Fey

Original Article

Abstract

Purpose

Automatic skill evaluation is of great importance in surgical robotic training. Extensive research has been done to evaluate surgical skill, and a variety of quantitative metrics have been proposed. However, these methods primarily use expert-selected features, which may not capture latent information in movement data. In addition, these features are calculated over the entire task and are provided to the user only after the task is complete. Thus, these quantitative metrics do not tell users how to modify their movements to improve performance in real time. This study focuses on automatic stylistic behavior recognition that has the potential to be implemented in near real time.

Methods

We propose a sparse coding framework for automatic stylistic behavior recognition in short time intervals using only position data from the hands, wrist, elbow, and shoulder. A codebook is built for each stylistic adjective using the positive and negative labels provided for each trial through crowdsourcing. Sparse code coefficients are obtained for short time intervals (0.25 s) within a trial using this codebook. A support vector machine classifier is trained on the sparse codes from the training set and validated through tenfold cross-validation.
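
As a rough illustration of this pipeline, the sketch below assembles the three stages described above (windowing the position signals into 0.25 s intervals, learning a per-adjective codebook and sparse-coding each window against it, and training an SVM validated by tenfold cross-validation) using scikit-learn. The sampling rate, window construction, dictionary size, sparsity level, and SVM kernel are illustrative assumptions, not the authors' implementation, and the way the crowd-sourced labels enter codebook construction is simplified here.

```python
# Minimal sketch of the per-adjective sparse-coding pipeline, assuming
# scikit-learn. Sampling rate, windowing, and model settings are
# illustrative assumptions, not the authors' implementation.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

FS = 100                      # assumed sampling rate (Hz)
WINDOW_LEN = int(0.25 * FS)   # 0.25 s intervals, as described above

def build_windows(trial, label):
    """Slice one trial (T x joint-coordinate array of hand/wrist/elbow/
    shoulder positions) into non-overlapping 0.25 s windows, each
    flattened into a feature vector carrying the trial's label."""
    n = trial.shape[0] // WINDOW_LEN
    X = trial[:n * WINDOW_LEN].reshape(n, -1)
    return X, np.full(n, label)

def fit_adjective_classifier(trials, labels, n_atoms=64):
    """trials: list of (T x joint-coordinate) position arrays;
    labels: crowd-sourced positive (1) / negative (0) labels for one
    stylistic adjective."""
    pairs = [build_windows(t, l) for t, l in zip(trials, labels)]
    X = np.vstack([p[0] for p in pairs])
    y = np.concatenate([p[1] for p in pairs])

    # Learn an overcomplete codebook for this adjective and sparse-code
    # every 0.25 s window against it.
    dico = DictionaryLearning(n_components=n_atoms,
                              transform_algorithm="lars",
                              transform_n_nonzero_coefs=5,
                              random_state=0)
    codes = dico.fit(X).transform(X)

    # Train and validate an SVM on the sparse codes (tenfold CV).
    svm = SVC(kernel="rbf")
    accuracy = cross_val_score(svm, codes, y, cv=10).mean()
    return dico, svm.fit(codes, y), accuracy
```

At test time, each new 0.25 s window would be sparse-coded against the learned codebook and passed to the trained SVM, which is what makes window-level, near real-time assessment plausible.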

Results

The results indicate that the proposed dictionary learning method can assess stylistic behavior in near real time from the user's joint position data, with higher accuracy than classifiers trained on PCA features or raw data.
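
For context, a comparison of this kind could be set up as in the hedged sketch below, scoring SVMs on raw windowed positions and on PCA features under the same tenfold cross-validation; the data here are random placeholders and the feature dimensions are assumptions, so the snippet illustrates only the protocol, not the reported numbers.

```python
# Hedged sketch of a raw-data / PCA-feature baseline comparison.
# X and y are placeholders standing in for the windowed position
# features and crowd-sourced labels; they do not reproduce the paper's data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 300))   # placeholder 0.25 s windows
y = rng.integers(0, 2, 400)           # placeholder adjective labels

baselines = {
    "raw": make_pipeline(StandardScaler(), SVC()),
    "pca": make_pipeline(StandardScaler(), PCA(n_components=10), SVC()),
}
for name, model in baselines.items():
    print(name, cross_val_score(model, X, y, cv=10).mean())
```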

Conclusion

The ability to automatically evaluate a trainee’s style of movement over short time intervals could provide the user with online, customized feedback and thus improve performance during surgical tasks.

Keywords

Surgical skill assessment · Crowdsourcing · Robotic surgery

Notes

Acknowledgements

This work was supported by the da Vinci® Standalone Simulator loan program at Intuitive Surgical (PI: Rege) and a clinical research grant from Intuitive Surgical, Inc. (PI: Majewicz Fey).

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.

Informed consent

Informed consent was obtained from all individual participants included in the study.


Copyright information

© CARS 2019

Authors and Affiliations

  1. Department of Electrical Engineering, University of Texas at Dallas, Richardson, USA
  2. Department of Surgery, UT Southwestern Medical Center, Dallas, USA
  3. Department of Mechanical Engineering, University of Texas at Dallas, Richardson, USA
