
Free-View, 3D Gaze-Guided Robotic Scrub Nurse

  • Alexandros Kogkas
  • Ahmed Ezzat
  • Rudrik Thakkar
  • Ara Darzi
  • George Mylonas
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11768)

Abstract

We introduce a novel 3D gaze-guided robotic scrub nurse (RN) and test the platform in simulated surgery to determine its usability and acceptability with clinical teams. Surgeons and trained scrub nurses performed an ex vivo task on pig colon. Surgeons used their gaze, captured by wearable eye-tracking glasses, to select surgical instruments on a screen, which in turn triggered the RN to deliver the selected instrument. Human-assisted and robot-assisted tasks (HT vs RT) were compared. Real-time gaze-screen interaction relied on a framework combining conventional wearable eye-tracking, a motion capture system and RGB-D cameras. NASA-TLX and Van der Laan’s technology acceptance questionnaires were collected and analyzed. Ten teams of surgical trainees (ST) and scrub nurses (HN) participated. Overall, NASA-TLX feedback was positive, with no statistically significant difference in overall task load between ST and HN. Perceived task performance was unaffected, although ST reported frustration. Van der Laan’s scores showed positive usefulness and satisfaction following RN use. There was no significant difference in task interruptions between HT and RT, and no statistically significant difference in time to task completion between the two groups. Quantitative and qualitative feedback was positive, and the source of frustration was identified. Importantly, there was no significant difference in task workflow or operative time, with overall perceptions of task performance remaining unchanged between HT and RT.
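To make the interaction loop concrete, the following is a minimal, hypothetical sketch of dwell-based gaze selection on an instrument screen; it is not the authors’ implementation. It assumes the eye-tracking/motion-capture/RGB-D framework has already mapped 3D gaze to 2D screen coordinates, and the tile layout, the one-second dwell threshold and the request_delivery hook are illustrative placeholders.

    # Minimal, hypothetical sketch (Python) of dwell-based gaze selection.
    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    @dataclass
    class InstrumentTile:
        name: str
        x_min: float  # on-screen bounding box, in pixels
        y_min: float
        x_max: float
        y_max: float

        def contains(self, gaze_xy: Tuple[float, float]) -> bool:
            x, y = gaze_xy
            return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

    class DwellSelector:
        """Emit an instrument name once gaze dwells on one tile for dwell_s seconds."""

        def __init__(self, tiles: List[InstrumentTile], dwell_s: float = 1.0):
            self.tiles = tiles
            self.dwell_s = dwell_s
            self._current: Optional[InstrumentTile] = None
            self._entered_at = 0.0

        def update(self, gaze_xy: Tuple[float, float], now: float) -> Optional[str]:
            hit = next((t for t in self.tiles if t.contains(gaze_xy)), None)
            if hit is not self._current:          # gaze moved to a different tile
                self._current, self._entered_at = hit, now
                return None
            if hit is not None and now - self._entered_at >= self.dwell_s:
                self._entered_at = float("inf")   # do not re-trigger on the same dwell
                return hit.name
            return None

    def request_delivery(instrument: str) -> None:
        # Placeholder for the command that would start the robot's pick-and-deliver
        # routine; the real robot interface is not described in this sketch.
        print(f"Robot: deliver '{instrument}'")

    if __name__ == "__main__":
        screen = [InstrumentTile("scissors", 0, 0, 200, 200),
                  InstrumentTile("forceps", 220, 0, 420, 200)]
        selector = DwellSelector(screen, dwell_s=1.0)
        for i in range(13):                       # simulated 1.2 s fixation on 'forceps'
            chosen = selector.update((300.0, 100.0), now=i * 0.1)
            if chosen:
                request_delivery(chosen)

In a sketch like this, the dwell threshold trades selection speed against accidental activations, which is one plausible source of the frustration reported by surgical trainees.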

Keywords

Smart operating room · Gaze interactions · Robotic scrub nurse

Notes

Acknowledgements

This research project is supported by the NIHR Imperial Biomedical Research Centre (BRC).

Supplementary material

490279_1_En_19_MOESM1_ESM.zip (25.2 MB)
Supplementary material 1 (ZIP, 25,855 KB)


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. HARMS Lab, Department of Surgery and Cancer, Imperial College London, St Mary’s Hospital, London, UK
  2. St George’s, University of London, London, UK
  3. Department of Surgery and Cancer, Imperial College London, St Mary’s Hospital, London, UK
