
The development of non-contact user interface of a surgical navigation system based on multi-LSTM and a phantom experiment for zygomatic implant placement

  • Chunxia Qin
  • Xingchen Ran
  • Yiqun Wu
  • Xiaojun Chen (corresponding author)
Original Article

Abstract

Purpose

Image-guided surgical navigation systems (SNS) have proved to be increasingly important assistance tools for minimally invasive surgery. However, standard human–computer interaction (HCI) devices such as the keyboard and mouse are latent vectors of infection, posing risks to both patients and surgeons. To address this problem, we proposed an optimized LSTM structure that recognizes gestures captured by a depth camera and applied it to an in-house oral and maxillofacial surgical navigation system (Qin et al. in Int J Comput Assist Radiol Surg 14(2):281–289, 2019).

Methods

The proposed optimized LSTM structure, named multi-LSTM, allows multiple input layers and takes the relationships between the inputs into account. To combine gesture recognition with the SNS, four left-hand signs, waved along four directions, were mapped to four mouse operations, while the motion of the right hand controlled the movement of the cursor. Finally, a phantom study of zygomatic implant placement was conducted to evaluate the feasibility of multi-LSTM as an HCI.
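
Below is a minimal sketch of such a two-stream recurrent classifier, assuming PyTorch; the abstract does not give the exact multi-LSTM topology, so the layer sizes, the late-fusion step, and the sign-to-mouse-operation mapping are illustrative assumptions rather than the authors' implementation.

    # Hedged sketch only: a two-branch LSTM classifier for the four waving signs,
    # with wrist and elbow 3D trajectories as separate input streams (assumption).
    import torch
    import torch.nn as nn

    class MultiStreamLSTM(nn.Module):
        def __init__(self, hidden_size: int = 64, num_classes: int = 4):
            super().__init__()
            # One LSTM per input stream: (x, y, z) wrist and elbow trajectories.
            self.wrist_lstm = nn.LSTM(3, hidden_size, batch_first=True)
            self.elbow_lstm = nn.LSTM(3, hidden_size, batch_first=True)
            # Fuse the final hidden states of both streams and classify the sign.
            self.classifier = nn.Linear(2 * hidden_size, num_classes)

        def forward(self, wrist_seq, elbow_seq):
            # wrist_seq, elbow_seq: (batch, frames, 3) trajectories from the depth camera.
            _, (h_wrist, _) = self.wrist_lstm(wrist_seq)
            _, (h_elbow, _) = self.elbow_lstm(elbow_seq)
            fused = torch.cat([h_wrist[-1], h_elbow[-1]], dim=1)
            return self.classifier(fused)  # logits over the four left-hand signs

    # Usage: classify one 60-frame gesture and map it to a mouse operation
    # (this mapping is hypothetical, not the one used in the paper).
    model = MultiStreamLSTM()
    wrist, elbow = torch.randn(1, 60, 3), torch.randn(1, 60, 3)
    sign = model(wrist, elbow).argmax(dim=1).item()
    MOUSE_OPS = ["left_click", "right_click", "scroll_up", "scroll_down"]
    print(MOUSE_OPS[sign])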


Results

3D trajectories of both the wrist and the elbow were collected from 10 participants to train the recognition network. Tenfold cross-validation of sign classification yielded a mean accuracy of 96% ± 3%. In the phantom study, four implants were successfully placed; the average deviations between planned and placed implants were 1.22 mm at the entry point and 1.70 mm at the end point, while the angular deviation ranged from 0.4° to 2.9°.
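
The deviation metrics above can be reproduced from the planned and placed entry/end points of each implant; the short sketch below, using NumPy, assumes all points are expressed in millimetres in a common coordinate frame, and the example coordinates are made up for illustration, not taken from the study.

    # Hedged sketch: entry/end-point deviations and the angular deviation between
    # the planned and placed implant axes (coordinates in mm, illustrative only).
    import numpy as np

    def implant_deviations(planned_entry, planned_end, placed_entry, placed_end):
        planned_entry, planned_end = np.asarray(planned_entry, float), np.asarray(planned_end, float)
        placed_entry, placed_end = np.asarray(placed_entry, float), np.asarray(placed_end, float)

        entry_dev = np.linalg.norm(placed_entry - planned_entry)  # mm
        end_dev = np.linalg.norm(placed_end - planned_end)        # mm

        # Angle between the planned and placed implant axes, in degrees.
        planned_axis = planned_end - planned_entry
        placed_axis = placed_end - placed_entry
        cos_angle = np.dot(planned_axis, placed_axis) / (
            np.linalg.norm(planned_axis) * np.linalg.norm(placed_axis))
        angle_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
        return entry_dev, end_dev, angle_deg

    # Example with made-up coordinates (mm):
    print(implant_deviations([0, 0, 0], [0, 0, 40], [1.0, 0.5, 0.2], [1.2, 0.8, 41.0]))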

Conclusion

The results show that this non-contact user interface based on multi-LSTM is a promising tool for eliminating the disinfection problem in the operating room and for reducing the manipulation complexity of the surgical navigation system.

Keywords

Gesture recognition · Depth camera · Surgical navigation system · Zygomatic implants

Notes

Acknowledgements

This work was supported by grants from the National Key R&D Program of China (2017YFB1302903; 2017YFB1104100), the National Natural Science Foundation of China (81828003), the PHC CAI YUANPEI Program (41366SA), the Foundation of Science and Technology Commission of Shanghai Municipality (16441908400; 18511108200), and the Shanghai Jiao Tong University Foundation on Medical and Technological Joint Science Research (YG2016ZD01).

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.

Informed consent

Informed consent was obtained from all individual participants included in the study.

References

  1. Qin C, Cao Z, Fan S, Wu Y, Sun Y, Politis C, Wang C, Chen X (2019) An oral and maxillofacial navigation system for implant placement with automatic identification of fiducial points. Int J Comput Assist Radiol Surg 14(2):281–289
  2. Sukegawa S, Kanno T, Furuki Y (2018) Application of computer-assisted navigation systems in oral and maxillofacial surgery. Jpn Dent Sci Rev 54(3):139–149
  3. Chen X, Xu L, Wang H, Wang F, Wang Q, Kikinis R (2017) Development of a surgical navigation system based on 3D Slicer for intraoperative implant placement surgery. Med Eng Phys 41:81–89
  4. Ebert LC, Hatch G, Thali MJ (2013) Invisible touch—control of a DICOM viewer with finger gestures using the Kinect depth camera. J Forensic Radiol Imaging 1(1):10–14
  5. Cheng H, Yang L, Liu Z (2016) Survey on 3D hand gesture recognition. IEEE Trans Circuits Syst Video Technol 26(9):1659–1673
  6. Gkalelis N, Kim H, Hilton A, Nikolaidis N, Pitas I (2009) The i3DPost multi-view and 3D human action/interaction database. In: Proc. conf. vis. media prod, pp 159–168
  7. Ren Z, Yuan J, Zhang Z (2011) Robust hand gesture recognition based on finger-earth mover's distance with a commodity depth camera. In: Proc. ACM MM, pp 1093–1096
  8. Gallo L (2014) Hand shape classification using depth data for unconstrained 3D interaction. J Ambient Intell Smart Environ 6(1):93–105
  9. Bhuyan MK, Ajay Kumar D, MacDorman KF, Iwahori Y (2014) A novel set of features for continuous hand gesture recognition. J Multimodal User Interfaces 8(4):333–343
  10. Cheng H, Luo J, Chen X (2014) A windowed dynamic time warping approach for 3D continuous hand gesture recognition. In: IEEE international conference on multimedia and expo (ICME)
  11. Liou WG, Hsieh CY, Lin WY (2011) Trajectory-based sign language recognition using discriminant analysis in higher-dimensional feature space. In: Proc. IEEE ICME, pp 1–4
  12. Hochreiter S, Schmidhuber J (1997) Long short-term memory. Neural Comput 9(8):1735–1780
  13. Graves A, Jaitly N, Mohamed AR (2014) Hybrid speech recognition with deep bidirectional LSTM. In: IEEE automatic speech recognition & understanding, pp 273–278
  14. Faysal U, Coskun Y, Sener BC, Atilla S (2013) Rehabilitation of posterior maxilla with zygomatic and dental implant after tumor resection: a case report. Case Rep Dent 2013:1–5
  15. Aparicio C, Manresa C, Francisco K, Claros P, Alández J, González-Martín O, Albrektsson T (2014) Zygomatic implants: indications, techniques and outcomes, and the zygomatic success code. Periodontol 2000 66(1):41–58
  16. Wang F, Monje A, Lin GH, Wu Y, Monje F, Wang HL, Davó R (2015) Reliability of four zygomatic implant-supported prostheses for the rehabilitation of the atrophic maxilla: a systematic review. Int J Oral Maxillofac Implants 30(2):293–298
  17. West JB, Fitzpatrick JM, Toms SA, Maurer CR Jr, Maciunas RJ (2001) Fiducial point placement and the accuracy of point-based, rigid body registration. Neurosurgery 48(4):810–816
  18. Chen X, Xu L, Yang Y, Egger J (2016) A semi-automatic computer-aided method for surgical template design. Sci Rep 6:20280
  19. Bautista MA, Hernández-Vela A, Escalera S, Igual L, Pujol O, Moya J, Violant V, Anguera MT (2016) A gesture recognition system for detecting behavioral patterns of ADHD. IEEE Trans Cybern 46(1):136–147
  20. Li YT, Wachs JP (2014) HEGM: a hierarchical elastic graph matching for hand gesture recognition. Pattern Recognit 47(1):80–88

Copyright information

© CARS 2019

Authors and Affiliations

  • Chunxia Qin (1, 2)
  • Xingchen Ran (3)
  • Yiqun Wu (4)
  • Xiaojun Chen (2) (corresponding author)
  1. School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
  2. Room 805, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
  3. College of Biomedical Engineering and Instrument Science, Zhejiang University, Zhejiang, China
  4. Shanghai Ninth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
